Tuesday, 9 January 2018

Where is the most desirable place to do a medical residency (2017 edition)?

People in medicine have uniforms. Not in the same way as police or firemen, but once you start hanging around health professionals enough you start to notice the patterns. This sociological phenomenon also becomes much more apparent when you visit the other medical tribes.

This first came to my attention when I took a short course on obstetrical management advances for primary healthcare providers. The participants consisted of me and a gaggle of giggling 20-year-old midwifery students. Being a midwife can be a much more difficult job than being an obstetrician because midwives often have to deliver in patients' homes without the support of hospital facilities and technology. This informs how they view the over-medicalization of birth, and the midwives teaching the course did not hesitate to correct my heretical views on fetal monitoring and pain medications. These exchanges would end with one of the midwives giving me, the naive medical resident, a patronizing smile. The midwifery students would then trade knowing looks about this buffoon in their midst.

Anyway, most of the day was spent glaring at midwives, and they all seemed to be wearing scarves – the decorative kind that one might find on the owner of an independent bookstore. I took another medical course last month with a similar gaggle of giggly 20-year-old midwifery students who were all similarly bescarved. It's almost as if the admittance letter to midwifery school comes with one of these scarves along with a license to condescend.

This sense of tribal dress-code is not limited to the allied health professions, though. Medical doctors have their own uniforms, but whereas scarves are fashionable, the informal dress-codes among physicians often provide a security blanket for clothing choices that would otherwise be considered heinous. There seem to be more fanny packs per capita in pediatrics than on a seniors' cruise ship. The only time I've ever encountered someone in a hospital in suspenders and a bow tie was on an internal medicine ward. Surgeons wear grimaces.

Family doctors like to wear golf shirts so that we can make our 2 PM tee times without going home to change. This made it an easy transition to my new uniform, which is that of a graduate student. My golf shirt is now paired with a set of ripped jeans, and the ensemble is peppered with coffee stains. It's not particularly glamorous.

But what it lacks in glamour, it makes up for in free time. This allows me to write (or, more often than not, procrastinate), and since it's the point in the year when medical students across Canada are agonizing over their match preferences, I might as well write about something that has become an annual tradition at Strange Data. Let's rank some medical schools.


*********************************************************************************

My goal for these rankings has always been to base the relative rank of a medical school on its desirability to medical students. I didn't want to base the rankings on journal citations, academic staff, or anything tangible at all. I wanted to use the wisdom of the crowd to determine ranking. Places that do a better job of making their residents happy, for whatever reason, will attract more medical students. That may be because they are located in nicer cities, they are more academically impressive, or they treat their residents well. I don't care how or why they are more desirable, just whether they are more desirable. The only thing that should matter is whether a medical student wants to go there for residency.

Desirability should be an easy thing to measure in this instance because the residency match process literally consists of medical students ranking their preferred medical schools. The only issue is that the Canadian Resident Matching Service (CaRMS) doesn't release any data related to how medical schools are ranked. This means I can't get any public data on the number of applications or on the ranking preferences of applicants. Instead, what I have to work with is outcome data. All I can see is which medical school a medical student matched to for their residency, rather than their ranking preferences. So my measurement of desirability is necessarily a proxy for true desirability. What I argue in this ranking is that a medical school with high desirability is likely one where a wide variety of medical students from other medical schools go there for residency.

The logic in this is twofold. First, a highly desirable medical school will receive highly desirable candidates, and there will be a sorting process where the best and the brightest from other medical schools are concentrated into these desirable medical schools. This should result in more diverse match cohorts. Second, desirable medical schools will receive a high volume of applications from outside medical schools. By sheer numbers, an idiot or two is bound to get through the screening process, resulting in a higher number of medical students from outside medical schools. This should also produce a more diverse match cohort. You can imagine which process applied to me when I got into my residency.

This desirability metric must be adjusted for the fact that larger medical schools can have more diverse matches simply because they can absorb larger numbers of medical students. To adjust for size we imagine that every medical school is equally popular and would take medical students from other medical schools in proportion to the number of overall open spots. Under these utopian conditions, if the University of Toronto has 10% of the total residency spots in Canada, then it should take 10% of the class from the University of Saskatchewan and 10% of the class from UBC and so on. If McGill has 5% of the total residency spots in Canada it should take 5% of the class from UBC and 5% of the class from the University of Toronto and so on. 

The way to measure desirability is then to estimate this utopian scenario and see how far real life deviates from it for each medical school. Deviations above the baseline mean that the medical school is more popular than it would be in a scenario where all medical schools were equally popular. Deviations below the baseline mean that the medical school is less popular than it would be in a scenario where all medical schools were equally popular.

The other important thing to note is that this ranking metric incorporates a measure of the effective spots in a residency class and the effective spots that a medical school contributes to the total pool of matching students. This tries to acknowledge that a medical class consists of leavers and stayers. A medical student who stays at their medical school for residency means one fewer spot for a medical student from outside. These stayers often have a leg up on competitors because they have spent most of their time at their own medical school and are a known quantity. A medical student who stays also means one fewer medical student in the pool of prospective students available to go to other schools. The effective spots also include how many residency spots at a medical school have gone unmatched.

The statistic for ranking medical schools is then based on the following set of equations. The utopian benchmark case where every medical school is equally popular is defined as:

$$B_a = \frac{R_a - r_a + U_a}{\sum_j \left( R_j - r_j + U_j \right)}$$



For a medical school a, the utopian benchmark value B_a is the ratio between the effective spots at that medical school and the total residency spots available at all medical schools. The effective spots at a medical school are the difference between the total residency spots at that school (R_a) and the medical students from school a who stay there for residency (r_a), plus the unmatched residency spots (U_a). The denominator is the total number of available residency spots across the country: all of the unmatched spots, plus the open spots at each university - the difference between each residency class and the number of medical students who stay at their home medical school for residency.

The actual case that we observe is described by the following relationship:

$$A_{b,a} = \frac{m_{b,a}}{M_b - r_b}$$




Here m_{b,a} is the number of medical students from school b who go to school a. Divide this by the total number of medical students available from school b, which is the difference between the total number of medical students at school b (M_b) and the number of them that stay at school b for residency (r_b).

The ranking statistic for school a is then the difference between the benchmark scenario, B_a, and the actual real-life scenario, A_{b,a}:

$$D_{a,b} = B_a - A_{b,a}$$



Add up these deviations for all of the other medical schools in the country (excluding the medical school in question), take the average, and you get a measure of a medical school's desirability for residency.
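For anyone who wants to see the whole calculation in one place, here's a minimal sketch in Python. The school names and match numbers are entirely made up for illustration - CaRMS doesn't publish the match matrix in this form, so treat this as the mechanics only, not the real data.

```python
# A minimal sketch of the desirability metric. match[b][a] is the number
# of students from school b who matched to school a; all numbers here
# are invented for illustration.

def desirability(school, schools, match, spots, unmatched, class_size):
    """Average deviation of a school's actual intake from the utopian
    benchmark. Negative = more desirable than the benchmark predicts."""
    # Effective open spots at each school: total spots minus stayers,
    # plus any spots that went unmatched.
    effective = {a: spots[a] - match[a][a] + unmatched[a] for a in schools}
    total_open = sum(effective.values())

    # Benchmark B_a: the share of the national pool this school would
    # absorb if every school were equally popular.
    B = effective[school] / total_open

    deviations = []
    for b in schools:
        if b == school:
            continue
        # Actual A_{b,a}: the share of school b's leavers who came here.
        leavers = class_size[b] - match[b][b]
        A = match[b][school] / leavers if leavers else 0.0
        deviations.append(B - A)
    return sum(deviations) / len(deviations)

# Toy example: school X pulls heavily from Y's and Z's leavers.
schools = ["X", "Y", "Z"]
match = {"X": {"X": 5, "Y": 3, "Z": 2},
         "Y": {"X": 4, "Y": 5, "Z": 1},
         "Z": {"X": 3, "Y": 2, "Z": 5}}
spots = {"X": 10, "Y": 10, "Z": 10}
unmatched = {"X": 0, "Y": 0, "Z": 0}
class_size = {"X": 10, "Y": 10, "Z": 10}

score_X = desirability("X", schools, match, spots, unmatched, class_size)
```

With these toy numbers, X's score comes out negative (it attracts a larger share of Y's and Z's leavers than its share of the national pool predicts), while Z's comes out positive - matching the sign convention described below.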

*********************************************************************************

For what follows I'm using 2017 match data for the R-1 main residency match. The more desirable a medical school is for residency, the more negative its desirability score will be. This is a result of the difference between the baseline utopian scenario and what we observe in real life. A more negative number means that medical students are going to these universities above what we would expect. These universities are punching above their weight. A university that has a positive desirability score has a baseline utopian scenario score above what we observe in real life. It is a less desirable place to do a residency. A university that has a desirability score of zero means that the utopian scenario is roughly what we observe in real life.



As per previous years, what follows is wild and irresponsible speculation on these rankings, so take whatever I say with a grain of salt. The first thing to note is that my residency alma mater - the University of Toronto - is in first place. This is despite the fact that I graduated from there in July, which goes to show you how desirable the place is even without a stellar resident such as myself attending. My former medical school, the University of Manitoba, is in third-last place.

As per previous years, the more desirable medical schools for residency are in reasonably nice cities like Toronto, Vancouver, and Calgary, or at least close to nice cities (McMaster). Most also have pretty decent training and research reputations. Undesirable places seem to be remote (NOSM, Memorial), cold (Manitoba), or predominantly French-speaking (Laval, Sherbrooke). As per previous rankings, Quebecois schools suffer disproportionately because much of the rest of the country can't speak medical French ("s'il vous plaît laissez tomber votre pantalon et penchez-vous"), which means most Anglo medical students won't apply for residency at these schools. The University of Montreal is the closest exception to this. It operates in French but is not in Quebec City or Sherbrooke. This means it is in high demand among medical students from other Quebecois schools who can only practice in French, which improves its level of diversity and its desirability metric.

There are a couple of discrepancies to these general rules. McGill is the obvious outlier, being in Montreal and having an excellent reputation. In many of McGill's residency programs it is highly recommended that applicants know both English and French, which means both Anglo and Quebecois medical students are at a disadvantage. English students also have the opportunity to train anywhere else in Canada, where resident pay is much higher, so McGill really gets screwed in the rankings.

Queen's continues to baffle me, placing an impressive second in terms of desirability. Queen's retains the lowest proportion of its medical student body, which means that it has a large number of effective residency spots relative to its size (allowing for a greater chance of a diverse match cohort). Queen's only retains about 17% of its medical students, whereas the average retention is closer to 50%. I originally thought that such a high number of effective spots was driving this result, but Queen's also gets penalized because the ranking metric requires it to have a proportionally higher level of diversity, so I don't think this is the explanation.

What may be happening with Queen's is that it is a major second-choice destination, especially in Ontario. It isn't that far from Toronto, Ottawa, McMaster, or Western, and these medical schools send a decent proportion of their classes to Queen's. It seems likely that being the runner-up on everyone's preference list boosts Queen's ranking enough to make it one of the most desirable medical residency programs in Canada.

The University of Saskatchewan is another notable exception to the above rules. Somehow, despite being in the only province colder and more boring than Manitoba (I can make fun of both of those provinces because I'm from Manitoba), it beats out a number of seemingly more attractive medical schools, including Ottawa and McGill. Relative to other provinces, Saskatchewan is also on the lower end for resident compensation, so I'm really not sure what's going on here.

As I have metrics for the last two match years we can also compare how these rankings have changed. This first panel is how the rankings have changed from 2016 to 2017.



McGill, Montreal and Ottawa all suffered precipitous drops in their rankings. There is probably a reasonable amount of variance in the rankings for the two medical schools in Montreal because of their location, so they top out in the middle of the rankings in good years, but when they have bad years they have really bad years for all of the reasons discussed above. Your interpretation of why Ottawa changed so much in this ranking depends on whether you think it is doing significantly worse now or whether it was overvalued by the rankings in 2016 and is reverting to its true ranking. I don't have a good explanation for it though.

Saskatchewan also did a lot better in 2017 than it did in the 2016 match. Part of this may be that it was facing the possibility of having its entire medical school accreditation stripped during the year leading up to the 2016 match. It's hard to overstate how embarrassing that would have been, and it was likely dragging down the school's ranking. They apparently did some things to prevent that from happening, so kudos to the deans at Saskatchewan for turning that dumpster fire into just a dumpster.

The change between 2015 and 2017 also reinforces some of the above observations. This second panel shows the rankings for each university in 2015 and 2017.




Results are similar to the previous graph in that Ottawa had a humongous collapse in its ranking while Saskatchewan was the main mover in the opposite direction. I will note that 2015 was my match year while 2017 was my graduation year. My residency school, Toronto, moved up a rank during my tenure there (coincidence?) while my medical school, Manitoba, dropped two positions during my absence (coincidence?!?!).

So a word of encouragement to those of you who are worried about matching to a program that may be considered "undesirable": those of you doing a specialty will probably be miserable and studying and inside a hospital for the next five years. To a certain extent it doesn't matter where you end up because for you, all roads lead to the library. Those of you doing family medicine have the great pleasure of laughing at those going into specialties. The schadenfreude should compensate for wherever you end up.

I've said this in the past, and I'll say it again - how desirable a place is for residency to the average medical student may have little bearing on your own ranking. Keep that in mind if you happen to be going through the hell of the match process this year. I know a doctor who loved the chance to live in the snowy wasteland of Manitoba because they could buy a house and live well. I know a doctor who decided they would rather leave the excitement of Toronto because it meant they got to transfer into a residency they preferred. Both are happier for it even though these rankings say they shouldn't be. What other people find desirable is only a small signal of information that should be part of the larger decision you make on your personal medical school ranking. The wisdom of the crowd is a signal that tells you something, but not everything.

As for me, I'm done with residency. I've managed to evade gainful employment by going back to graduate school. I'm willingly taking a massive pay cut in order to write an unsuccessful blog and argue with economists over nothing. That's probably a signal that you shouldn't be taking my advice on any of this.  

Monday, 14 August 2017

Don't get sick in July.


I imagine being in a medical residency program is much like being in prison. I can only imagine because I have never been to prison. But there were some days in residency that I would have gladly traded spots.

In prison, you have little to no responsibility. Sure, they give you little jobs like folding laundry or making license plates but let's face it, society doesn't trust prisoners to run power plants or safety-test cars. So it is with residency, where staff doctors trust residents about as much as one would trust a toddler to build an airplane. The enthusiasm is there but the knowledge is lacking, and can you really blame anyone other than yourself for the consequences when you trust a kid to build an airplane? This is why staff doctors get so angry at you when you accidentally maim a couple of patients. The licensing board isn't going to let a staff doctor keep their medical license when they trust an unshaven goof of a resident to do medicine on real patients, especially when the result is someone losing a thumb.

In prison, unless you are a member of a gang, you're probably another prisoner's bitch. In residency, this gang is known as the staff physicians, and all work flows downward to the residents (the bitches) with most of the cash flowing the opposite way. Ponzi schemes are central to medical education. I once did a clinic day where my supervisor sat in a back room and ate Slim Jims. I got to see the slate of patients, about half of whom were coming in for a driver's physical examination that was not covered by public health insurance. So the patient would hand me $80, and I would dutifully go through a silly examination that I can almost guarantee has saved on the order of tens of lives. The patient would leave, and I would take the money to the back room and gently place it on the desk beside the staff physician, much like a sex worker paying their pimp. The staff physician would grunt and continue eating Slim Jims and watching YouTube, and I would slink back to the exam room to earn them more money.

I expect that in prison, excitement is something that is in high demand. It's a good day when there's a shiv fight or a prison riot, if only to break up the monotony. But once you get back onto the mean streets, I would expect one's lust for shivving and rioting decreases because these activities are usually parole violations. I remember in residency that I would be similarly drawn to exciting medical cases. You can only review so many runny noses with a staff doctor before you want to gouge your eyes out with a tongue depressor. An occasional heart attack livens things up. Now, I'd rather see runny noses because it pays the same and there isn't the feeling of black dread as you realize that you have to solve a person's emergent medical problem. It's a lot easier as a resident to go through the kabuki of a physical exam, babble about some barely relevant medical study to your supervisor, and then assume your staff doctor will solve the problem. At the very least, they can take the blame when things go wrong.

I’m using a lot of past tense phrasing here because I am no longer a resident doctor. I am a certified, excitement-avoiding, “responsible” doctor. Unless you have a truly frightening life, this should be the scariest thing you have read today. I can prescribe narcotics with the swipe of a pen. Nurses no longer treat me with the disdain that I probably deserve. Residents and medical students who I have worked with have been calling me “Doctor” instead of “hey you” and they don’t even have a choice because I’m a new member of the gang.

As it is now August, I have a full month of experience under my belt and have yet to be proven to have killed anyone as a staff physician. I'm currently batting 1.000. It's probably an unsustainable track record, but that isn't necessarily a bad thing. In medicine, mistakes are often the best source of teaching, and catastrophic mistakes are lessons that a doctor never forgets.

The idea of an acceptable level of death to train doctors is routinely brought up in medical education. One reflection of this is the "July Effect". This suggests that as new medical students graduate in July and become resident MDs, they suddenly have a lot more responsibility without the prerequisite knowledge. They're toddlers trying to build airplanes. As a result, a lot more patients die in the month of July than otherwise would. As a new, independent practitioner with significantly more expensive malpractice insurance, I suspect the same effect must appear as resident doctors graduate to fully fledged staff doctors.

To try and find this effect we have to go to provincially aggregated data. Statistics Canada has tracked provincial deaths by month since 1990. To this we add a July population estimate to create a deaths-per-population measure. The Canadian Medical Association keeps a file on physicians entering independent practice by the school they graduated from. I assume that the province a doctor practices in come July is the one in which their medical school is located (this is a strong assumption, but there is probably some correlation between the two).

Now, because this is aggregate data rather than micro data, it may pick up a number of effects. We could get around this by linking patients to physicians, which is usually what is done in real studies of the July effect, but I don't have legal access to any data that would allow it. From a theoretical perspective, there are two major effects that new doctors might have on the mortality rate. The straightforward effect is that more doctors should mean fewer deaths. Patients go to a doctor to improve their health, so when provinces graduate more doctors, it should result in fewer deaths. Conversely, if those doctors are bad enough at producing health (think leeches and bloodletting), more doctors might translate into more deaths.

The interaction of these two effects may be a function of the supply of new doctors in the province. If you have a restricted stock of doctors, then people can't get very timely access to a medical opinion and management. Additional graduating doctors mean that people are at least getting the medical opinion of a bad doctor, and that may be better than no doctor. Broken clocks are right twice a day.

Once you get to a certain saturation point, though, this effect might tail off. The marginal benefit of another new grad is basically zero once the supply is large enough to ensure that everyone can see a doctor. Furthermore, large numbers of new physician grads may siphon off patients from the stable stock of older, experienced doctors, causing damage. I suspect that Toronto suffers from this problem because you can't throw a reflex hammer without hitting the office of a new physician.

And this is what the aggregate data shows. A quadratic function seems to fit this data best. At lower levels, graduating additional residents seems to lower the mortality rate in a province. Once you get past a certain point, though, the relationship reverses itself, and a higher population-adjusted graduation rate of residents leads to increasing mortality. This holds even once you control for year of graduation and provincial fixed effects.


If you believe this result, the implication is twofold. First, if new doctors really want to produce health, the best place to do this is in a place that has fewer graduating doctors. This seems like an obvious result. The more subtle result is that if a new doctor also wants to minimize the amount of damage that they can do, they should avoid places with an over saturation of new doctors. Practicing in places with higher physician graduation rates seems to kill more people. 

That isn't to say that new grads don't kill people even in places that have few doctors - it's just that the net result seems to be to save lives. But saving lives is easy. Coming to grips with catastrophe is what is difficult.


*********************************************************

Update for the nerds:

So I've been rightly criticized for the above graph not convincingly demonstrating the argument that I've put forward about the July effect.  There is obvious clustering of the data by province and there are other obvious omitted variable problems that might invalidate the relationship demonstrated in the figure. I will say that I put forward this argument based upon the above graph as well as a regression analysis which I will describe below. The problem is that whenever you put a regression table in anything people tend to find better things to do than read your blog.

So the basic econometric analysis that I used is a fixed effects regression. The regression analysis, in first differences, is described as:

$$\Delta d_{iy} = \beta_1 \, \Delta r_{iy} + \beta_2 \, (\Delta r_{iy})^2 + \gamma_y + \epsilon_{iy}$$


The outcome variable, Δd_{iy}, is the change in the number of July deaths in province i in year y from the previous year. The variables of interest are the change in the number of residents graduating in July relative to the year before, Δr_{iy}, and its square, which lets us see whether a quadratic function fits the data better than a straight line. Both the number of deaths and the number of residents in a province are population adjusted. The γ_y terms are a battery of dummy variables that control for year fixed effects across provinces, and ε_{iy} is an i.i.d. error term. Because this is a fixed effects regression, the above is, under certain assumptions, equivalent to:

$$d_{iy} = \alpha_i + \beta_1 \, r_{iy} + \beta_2 \, r_{iy}^2 + \gamma_y + \epsilon_{iy}$$


In this second regression, province fixed effects are now included. The major assumption for equivalency is that the unobserved differences between provinces do not change in a significant way over the period from June to July within a year. For example, one omitted variable that may influence these results is the overall age of a population. Older people are generally sicker and thus require more doctors, so what we may be observing in the diagram is that places with more residents are in fact older provinces where people are already going to die anyway - it's not the number of residents but this other variable that causes the increased mortality. But in fixed effects regressions this effect is essentially scrubbed out if you assume that the age of a population doesn't change appreciably over the month. So while age may be a long-run driver of mortality, it probably isn't driving the change in deaths from June to July in a given province in a given year. You can make a similar argument for provincial health budgets and the overall health of a population. The regression results are below:

Fixed-effects (within) regression               Number of obs      =       176
Group variable: province                        Number of groups   =         8

R-sq:  within  = 0.2434                         Obs per group: min =        22
       between = 0.0102                                        avg =      22.0
       overall = 0.0058                                        max =        22

                                                F(23,145)          =      2.03
corr(u_i, Xb)  = -0.2484                        Prob > F           =    0.0065

------------------------------------------------------------------------------
    deathpop |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      respop |   -2.82058   1.027699    -2.74   0.007    -4.851787   -.7893741
    respopsq |   1.139464   .6095551     1.87   0.064     -.065297    2.344225
             |
        year |
       1993  |  -.1203263    .172442    -0.70   0.486    -.4611509    .2204983
       1994  |  -.1457825   .1877364    -0.78   0.439    -.5168359    .2252708
       1995  |  -.1178577   .1872615    -0.63   0.530    -.4879726    .2522571
       1996  |  -.0881603   .1824971    -0.48   0.630    -.4488584    .2725379
       1997  |  -.0174301    .187629    -0.09   0.926    -.3882713     .353411
       1998  |   .0038659    .181337     0.02   0.983    -.3545393    .3622711
       1999  |  -.0097162   .1856657    -0.05   0.958    -.3766769    .3572445
       2000  |  -.0441763    .191492    -0.23   0.818    -.4226524    .3342998
       2001  |  -.0613779   .1980348    -0.31   0.757    -.4527857    .3300299
       2002  |   .1049069   .1937872     0.54   0.589    -.2781056    .4879193
       2003  |  -.0123993   .2022462    -0.06   0.951    -.4121307    .3873322
       2004  |   .2426462   .1909051     1.27   0.206    -.1346701    .6199625
       2005  |   .1265805   .1867561     0.68   0.499    -.2425353    .4956963
       2006  |   .0993074   .1881395     0.53   0.598    -.2725427    .4711574
       2007  |   .2558162   .1855344     1.38   0.170    -.1108851    .6225174
       2008  |   .1897101   .1791312     1.06   0.291    -.1643356    .5437557
       2009  |   .1189249    .182879     0.65   0.517    -.2425281     .480378
       2010  |   .2822041   .1796839     1.57   0.118    -.0729338     .637342
       2011  |   .3621478   .1765737     2.05   0.042      .013157    .7111386
       2012  |   .2152357   .1740925     1.24   0.218    -.1288511    .5593226
       2013  |    .512245   .1743833     2.94   0.004     .1675834    .8569066
             |
       _cons |   7.268101   .4572194    15.90   0.000     6.364425    8.171777
-------------+----------------------------------------------------------------
     sigma_u |   .8464953
     sigma_e |   .3420792
         rho |  .85961847   (fraction of variance due to u_i)
------------------------------------------------------------------------------
F test that all u_i=0:     F(7, 145) =   113.76              Prob > F = 0.0000

This shows that as you increase the number of residents, population-adjusted deaths decrease, but at a decreasing rate (the positive squared term). As a rough control, this relationship doesn't hold by August of each year: the difference in population-adjusted deaths between August and June is not statistically affected by the year-over-year change in residents graduating in a given province.
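For anyone who wants to replay the mechanics without Stata, here's a numpy-only sketch of the dummy-variable (LSDV) version of the same specification. The data are synthetic - the coefficients in the data-generating process are fabricated to mimic the quadratic shape, not taken from the real Statistics Canada series - so only the mechanics, not the estimates, carry over.

```python
# LSDV sketch of the fixed effects regression: province and year dummies
# stand in for the fixed effects. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_prov, n_year = 8, 22                 # 8 provinces x 22 years = 176 obs

prov = np.repeat(np.arange(n_prov), n_year)
year = np.tile(np.arange(n_year), n_prov)
respop = rng.uniform(0.5, 2.0, size=n_prov * n_year)

# Fabricated DGP: deaths fall with more residents but at a decreasing
# rate, plus a province-level fixed effect and noise.
prov_fe = rng.normal(0.0, 0.8, size=n_prov)[prov]
deathpop = (7.0 - 2.8 * respop + 1.1 * respop**2
            + prov_fe + rng.normal(0.0, 0.3, size=n_prov * n_year))

def dummies(codes, n_levels):
    # One-hot columns, dropping the first level to avoid collinearity
    # with the intercept.
    d = np.zeros((len(codes), n_levels - 1))
    for j in range(1, n_levels):
        d[codes == j, j - 1] = 1.0
    return d

X = np.column_stack([np.ones_like(respop), respop, respop**2,
                     dummies(year, n_year), dummies(prov, n_prov)])
beta, *_ = np.linalg.lstsq(X, deathpop, rcond=None)
b_respop, b_respop_sq = beta[1], beta[2]
```

Because the synthetic DGP was built with a negative linear term and a positive squared term, the estimated coefficients reproduce the signs in the Stata table above; the point is only to show how the within estimator can be run as a plain dummy-variable regression.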

There is a major concern that I have with the empirical strategy, but I'll leave it to you nerds to try and figure out what I'm worried about.

Tuesday, 8 November 2016

Which presidential candidate will make you drink more?

I was a debater in high school. At great personal expense to my social life I went out and spoke instead of participating in sports or art or music or whatever as an extracurricular activity. For some of you who know me but didn’t know that, I bet some puzzle pieces are falling into place.

My debate teacher was well respected and had a great depth of experience judging debates. He had a regular story about the worst debate he had ever judged. The resolution, or what was being debated, was "better dead than red". For those of you who did sports in high school rather than debating, any resolution is essentially meaningless and can be interpreted in any fashion you would like. The above resolution could have been a debate about the merits of communism versus capitalism; it could have been a debate about whether we should force parents to vaccinate their children against their will; it could even have been about how terrible tomatoes are and how we would be better off if we didn't eat them.

Instead though, the debaters in the “worst debate of all time” defined the resolution as whether it was better to be aboriginal or dead. The debate team that opposed the resolution settled on the argument that death was horrible but that it was only slightly worse than being aboriginal. The team in favor went with the similarly racist argument that, no, living as an aboriginal person was horrible for reasons including that they were drunk all the time and that they didn’t work. Death was obviously preferable. The second speaker on this team did eventually decide to go with the reasonable argument that a lot of why aboriginal Canadians have had so many social and economic problems have been as a result of systemic oppression. But then he went back to defend his partner’s points on the drunkenness and crime and all of that. It sounds like something out of a piece of Ku Klux Klan literature rather than a high school debate.

It’s hard to compare anything to it. As they were teenagers at the time I guess we have to give them the benefit of the doubt. Nevertheless, it seems like an obtuse and tone-deaf debate to have when you are (most likely) a white male, (most likely) come from a household with an income in the six figures, and (most likely) are wearing a suit jacket with some prep school crest on it. True to stereotype, most debaters were white, male, prep-school-educated knobs when I was debating. I was, of course, the only exception.

The closest that I have come to witnessing a similar debate though (and this is in the crudest sense as it’s difficult to get anywhere close to this “worst debate of all time”) was several weeks ago when I watched the brouhaha in St. Lou-ha-ha, the melee in Missouri, the federal debate fisticuffs between Hillary and Donald. A candidate running for president had to publicly deny sexually assaulting women, disavowed on record something their running mate said, and then threatened to lock up their opponent. And Hillary was there too. It was a debate that wouldn’t have been out of place in a banana republic.

I am under no illusion that I will have any effect on the outcome of this presidential contest. Most of the people who read this blog are Canadian and are therefore ineligible to vote in an American election. I am not going to convince any Americans that I know who are voting for Hillary to switch their allegiance.  I am not going to convince any Americans I know who are voting for Donald because most of these people are stuck scratching their lobotomy scars when I use polysyllabic words like banana.

So I have nothing to contribute to this debate. But what I can do is address the obvious question: who will cause you to drink more if elected president? We have a perfect testbed for this question, which is to see how much the candidates cause us to drink during the final debate which occurred two weeks ago in Las Vegas.

To test this question, I recruited one healthy male volunteer of legal drinking age into this experiment.  In the interest of anonymity, we will call him by his initials, SS. SS has a prodigious brain and an even more prodigious liver and so could spare both in the interest of the scientific method. I also purchased a police-grade breathalyzer which makes this blog post, by far, the most expensive in Strange Data history. I had attempted to run this experiment on the previous debate in St. Louis, but the first breathalyzer I purchased seemed to function as a random number generator rather than an accurate test of blood alcohol content. I had to reschedule until Amazon shipped the new one.

Good old-fashioned American debate drinking games have been a staple of the socially awkward college student since debates were televised. They’re as American as apple pie or failed gun control legislation. Most of the normal drinking game rules are based upon the script that regular American politicians follow. This pabulum forms the basis of all political drinking games because it seems like most American politicians (but really all politicians) can’t get through a speech without reciting all of these terms. I chose the following terms or concepts as the foundation of the drinking game rules:

  • "middle class"
  • "small businesses"
  • A promise of tax cuts 
  • The “I’m a regular joe, not a politician” story 
  • Accusing an opponent or the establishment of being “Washington insiders"
  • “The children are our future” anecdote

These would be the terms that I would expect from a regular election cycle where the candidates weren’t universally hated and scandal plagued. This is not the case this year. So to add to these terms were several specific to the discussion (I use that term loosely) that has been occurring around the candidates this year. These “wildcard” terms include:

  • Any affair/sexual impropriety talks
  • Emails/Wikileaks talks
  • Stamina/health/fitness talk
  • China – they’re taking our jobs
  • Mexico or how to build a free wall
  • Tax returns

Mentioning any of these twelve concepts earned one drink (which I measured in a one-ounce shot glass). The drink would be earned on a per-thought basis. For example, a candidate who said “middle class” repeatedly during the same line of argument would earn themselves one drink. An opposing candidate who replied to that line of argument though would also earn themselves a drink. A candidate who mentioned one of these terms, then switched topics, and then switched back to the term in question would earn themselves two drinks.

I chose these twelve terms for two major reasons: they’re fairly objective and they’re discrete, so it’s easy to measure drinks. In previous elections I have used drinking rules like, “if the candidates speak over the moderator, drink until they stop”, which makes it difficult to measure alcohol intake. I have also used drinking rules like “if the candidate tells a lie then drink”, which is a subjective rule at least in real time (and wouldn’t you be drinking constantly?). But to the above objective drinking rules I added one that is more subjective: if a candidate says something that makes my jaw drop, finish a drink. To accurately gauge alcohol intake for SS I used a measuring cup to record these finished drinks.

As this is a blog that’s nominally about health and medicine, I guess I should talk about something health-related. Your liver is an organ that loves you dearly. It loves you so much that it will help clear any alcohol in your blood out of your body so that this alcohol does not kill you. I once heard from an intensive care doctor that all you really need to live is a small bit of brain and your liver. Everything else can kind of be replaced and although you will have a poor and uncomfortable life, you will live.

The way that alcohol (usually) enters your body to get to your liver is by drinking it. It goes in your face. It is then dumped by the esophagus into the stomach, where about 20% of it is absorbed through the stomach lining. The stomach then dumps it into the initial part of the small intestine – the duodenum – where the remaining 80% is absorbed. Any food in the stomach delays gastric emptying and the uptake of alcohol, which explains why people do not feel as intoxicated as quickly when they eat while they drink.

Alcohol gets absorbed into the blood and then is transported via the portal blood system directly to the liver, where that organ gets first crack at ridding you of it. This is called first-pass metabolism and it is why orally ingested drugs (that are cleared by the liver) are usually less potent than those given intravenously. This first pass occurs both in the liver and at the interface between the gastric mucosa and the circulatory system. The liver itself continually purifies blood flowing through it, so any alcohol that does not get cleared on the first pass will be cleared on further passes through the portal system.

From the liver’s circulatory system alcohol disseminates into the bloodstream. It then exerts its effects on the brain and gives the feelings of intoxication. Alcohol continues to circulate throughout the blood and is processed by the liver every time it passes through the portal vein system. The biochemical mechanism for this process in the liver is a conversion by the enzyme alcohol dehydrogenase into acetaldehyde, which is further broken down into acetic acid and, finally, carbon dioxide and water. The alcohol dehydrogenase step requires a coenzyme; this rate-limits the conversion so that at sufficiently high quantities of alcohol in the blood, the liver will metabolize and excrete alcohol at a constant rate. Once you hit saturation you can theoretically predict blood alcohol content (BAC) and the excretion of alcohol by the following equation:

BAC = (v × z × a × d) / (m × r) − (B × t)
Blood alcohol content (BAC) is then a function of the volume of alcohol consumed (v), the strength of the alcohol (z), the proportion of alcohol absorbed (a – presumed to be one under “normal” experimental conditions), alcohol density (d – equal to 0.789 g/ml), the mass of the subject (m), and a conversion factor that reflects the water content of a person’s tissues (r). The excretion term is a function of the time since consuming alcohol (t) and a coefficient that has been estimated from previous experiments (B – about 13.3 mg/dL/hr, i.e. roughly 0.013 g/dL/hr).
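The prediction side of this is a few lines of Python. A sketch only: the body-water factor (0.68, a typical value for men), the subject's mass (80 kg), and the elimination default are assumed illustrative values, not measurements from the experiment:

```python
def widmark_bac(v_ml, z, m_kg, t_hr, a=1.0, d=0.789, r=0.68, B=0.0133):
    """Predicted blood alcohol content in g/dL under the Widmark model.

    v_ml: volume drunk (ml); z: alcohol fraction by volume; m_kg: body
    mass (kg); t_hr: hours since drinking began; a: proportion absorbed;
    d: ethanol density (g/ml); r: body-water conversion factor (assumed);
    B: constant elimination rate (g/dL/hr, assumed).
    """
    grams = a * v_ml * z * d                   # grams of ethanol ingested
    peak = grams / (m_kg * 1000 * r) * 100     # spread over body water, in g/dL
    return max(peak - B * t_hr, 0.0)           # the liver clears at a constant rate

# e.g. 1700 ml of 4.8% beer, 1.5 hours into the debate
print(round(widmark_bac(1700, 0.048, 80, 1.5), 4))
```

With these assumed inputs the prediction lands just under 0.1 g/dL, in the same ballpark as the breathalyzer peaks reported below.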

As we have the ability to predict BAC, we can then estimate the tolerance that SS has to alcohol. There are several ways to classify tolerance in alcohol ingestion. One classification in particular separates how the alcohol acts on the body from how the body acts on the alcohol. Functional tolerance relates to how sensitive the brain is to the intoxicating effects of alcohol – it is how the alcohol acts on the body. Increasing functional tolerance is why alcoholics can drink quarts of vodka and not slur their words. This is a difficult thing to measure accurately as BACs will still be high despite behaviour that suggests otherwise.

What we can measure a little more accurately, though, is dispositional tolerance. Dispositional tolerance is how the body deals with alcohol. A liver that is chronically exposed to alcohol can clear the same amount of alcohol at a quicker pace. Since we have a way to estimate the theoretical BAC of SS and we will also have empirical estimates of BAC through the breathalyzer, we can check his relative dispositional tolerance to alcohol. If the log of the ratio of empirical BAC to predicted BAC is below zero then SS clears alcohol faster than predicted and has a higher dispositional tolerance than the average person. If the log of the ratio is above zero then SS has a lower dispositional tolerance and is a lightweight.
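That comparison is a one-liner; the readings here are made up purely to show the sign convention:

```python
import math

def dispositional_tolerance(measured_bac, predicted_bac):
    """Log of measured over Widmark-predicted BAC.

    Below zero: clearing alcohol faster than predicted (higher
    dispositional tolerance). Above zero: clearing slower (lightweight).
    """
    return math.log(measured_bac / predicted_bac)

print(dispositional_tolerance(0.08, 0.10))   # negative: faster clearance
```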

So, some logistics. BAC breathalyzer measurements were taken in six-minute blocks and all drinks taken during a block were attributed to the beginning of that block. The debate itself lasted about 90 minutes but the total time we assessed the BAC of SS was 114 minutes. A scribe/referee counted drinks and I went back after the debate with a transcript and checked the count to ensure accuracy. SS fasted prior to the session and did not otherwise consume alcohol during the experiment. In honor of America we chose an American beer that had an alcohol content of 4.8%. It was the weakest beer in my fridge.

So first of all, what were the subjects of discussion (again – using that term loosely) that earned the most drinks during the debate? The following graph shows that, on a per-drink basis, Clinton and Trump talked about China and Mexico a lot. Tax cuts were also a big subject of discussion during the economic portion of the debate. True to form, Clinton made SS drink on the stereotypical politician talking points like “middle class” or “small businesses”. True to form, Trump mainly made SS drink during the wildcard portion of the debate while discussing the Wikileaks scandal and making vaguely racial remarks. He was one mad hombre.



On the topic of jaw-drops, there were four in total during this debate. Clinton called Trump “Putin’s puppet”, which made Trump’s face look like it was about to burst from all the blood flowing to it. One for Clinton. Trump accused Hillary Clinton and Barack Obama of personally organizing a riot in Chicago during one of his rallies. Then he refused to say that he would accept the results of the election. Two jaw-drops for Trump. The moderator, Chris Wallace, also got a jaw-drop for casually mentioning that the candidates (because they hate each other so much) could not agree to closing arguments at the end of the debate. Kudos to Chris for getting on the board.

On a who-will-make-you-drink-more basis, we have conflicting results. In terms of number of drinks, Clinton won. In terms of volume of drinks, Trump won, mostly on the back of his jaw-drop drinks.  Like the tortoise and the hare, Clinton was slow and steady, and Trump was all over the place. This tie-of-sorts seems consistent with their leadership styles.



Over the totality of the debate, SS consumed about 1700ml of beer. His peak BAC occurred at the 96th minute at the end of the debate, thanks to Chris Wallace and his loud mouth. This peak level was 0.102 g/dl.  The other BAC peak was a 0.099 g/dl reading in the 48th minute after a flurry of affair and sexual impropriety talk and then Trump accusing a sitting US president of organizing a riot.



It turns out that, at least relative to the Widmark prediction, SS clears alcohol well at high levels but is a lightweight at lower levels of consumption. As the debate continued into the 50th minute and the volume of beer increased dramatically, the log of the ratio of measured to predicted BAC dipped below zero. His liver was lazy at the beginning of the debate but started to kick into higher gear as the night wore on and more outlandish things were said.



So I guess we’ll give this contest to Trump as he did make SS drink the highest volume of beer. It’s a pyrrhic victory for the Donald but at least it’s a victory. Winston Churchill once said that you can always trust the Americans to do the right thing after exhausting all of the alternatives. I never thought one of those alternatives would be a presidential candidate who was a woman-groping, race-baiting, anti-intellectual, but here we are.  I wish I had more of a profound ending to this post but it’s hard to be profound when you have to affirm that these attributes are disqualifying for a leader. But the Americans will do the right thing. That, or my liver will be working a lot harder over the next four years.

Friday, 26 August 2016

Where is the most desirable place to do a medical residency (2016 edition)?


There is a cognitive dissonance between selecting students for admission and then producing doctors. The traits that make you a slam-dunk admission are no longer all that important once you get into the program. You play violin in the symphony? You’re in! But you should really stop playing and study more. You’re an Olympic gymnast? Here’s your white coat! But there’s no time to practice, you should study more. You volunteer at an orphanage for victims of arson? Welcome to medical school! But you should really stop helping them and study more.

To some extent, medical schools don’t really care about what extra-curricular activities you salt your resume with as long as you breach a certain threshold. The signal they get from a padded resume is that you’ve been busy and you can remain busy, and this is good because medical school is busy. Rather than having much of an interest in students with hobbies, they’re selecting for students who can tolerate a heavy workload. This theory seems far more consistent with what then happens in medical school, which, as one staff physician put it to me, was to “make all of you unique little snowflakes into snow”.

This homogenization process would manifest itself regularly in pre-rotation orientation sessions when the attending physicians would inevitably ask the group of senior medical students to tell the group about “one thing you like to do in your free time”. A common response was “I like to travel”, which I think was a reflection of how few extra-curriculars most of us had at that point and also how most pleasurable thoughts revolved around getting the hell away from medical school. The attending physician would then say some generic pleasantry about traveling and free-time and, without a trace of irony, pass out the call-schedule. Welcome to internal medicine/pediatrics/surgery/obstetrics, you’re working for the next fourteen days straight!

This lack of intellectual diversity became more of a problem when it came to the residency match. Everyone had gotten into medical school based on their grades (which were uniformly good in undergrad) and their extra-curriculars (which no one had any more because of medical school). So almost everyone was a generic medical student to the higher-ups. Medical school in Canada is also unique in that most residency programs don't care what your grades were in medical school as long as you pass, so there was little to distinguish medical students on this basis. I was an enormous beneficiary of this policy so I'm not sure I have a whole lot of grounds to malign it, but it did induce some perverse incentives.

Specifically, it induced medical students to "work together as part of the medical team". This may sound like a good thing when one of the deans says it, but it is more colloquially known as ass-kissing. Marks and special skills mattered far less and so it became more about who liked you when you worked for them. Since rotations were only several weeks at a time, intense, adoring flattery was the best way to snag a good reference or to gain favor with whatever program director. As it was another thing I wasn't very good at in medical school, I found the process extremely exhausting. It also gave a lot of power to people higher up the food chain in medicine. Medicine is a profession known for its abuses of learners, and sometimes you can see why when there are imbalances like the ones inherent in the match process.

Anyway, this culminated in the fourth year of medical school during the residency match, where universities across the country listed their preferred medical students. If you made the cut, you got into your program and you got the privilege of taking crap from senior physicians for a paycheque. About a year ago I had matched and they (probably) couldn’t fire me because I had a contract, so I decided to turn the tables on the medical schools and rank them. 
 
As I know too many people in medicine, this was my most popular blog to date. Not even the sex one beat it, despite it being WAY more interesting, better-written and full of dirty word-play. To capitalize on the upcoming match process and boost traffic to my blog, I am releasing a new and improved medical school rank.

*********************************************************************************

My ultimate goal last year was to base the relative rank of a medical school on the desirability of the school to medical students. I didn’t want to base the rankings on journal citations, academic staff, or anything tangible at all. I wanted to use the wisdom of the crowds to determine ranking. Places that do a better job at making their residents happy for whatever reason would attract more medical students. That may be because they are located in nicer cities, they are more academically impressive, or they treat their residents well. I don’t care how or why they are more desirable, just whether they are more desirable.

It's a little more difficult to assess desirability in a system where there is a cap on demand for spots. There are only so many residency spots to go around and when they fill up that's it. In a well-functioning market, desirability is something that can be revealed (arguably) by volume in a short-term sense and in price in a longer-term sense. This doesn't work with residency spots and so my logic last year was to see what universities seemed to attract the medical students from across the country. The argument for revealing desirability was that a desirable school is universally desirable and should attract a high volume of applicants as well as the best applicants from other medical schools. Undesirable schools would not receive as diverse an array of applicants and so this would show up in a limited number of outside medical students going to those schools.

Given the data available, I still think that this is the best way to rank medical schools, but the methodology I used was incomplete. This previous statistic for a medical school was essentially the average percentage of other medical school classes that went to the medical school in question. Although I hinted at the problems with this, I couldn’t come up with a solution and so I really didn’t do anything about it. I was also going to the University of Toronto and it was number one, so it was a result that coincided with my prior expectations.

The major problem had to do with the size of the accepting medical school as compared to the size of the donating medical school. When you have a large pool to accept into, you can take a large number of people from other medical classes. Since the University of Toronto has some 300-odd residency spots, it can easily accept 10% of the medical class from Memorial University. But Memorial is a small medical school and its residency class is similarly small. Memorial University couldn’t really take 10% of the University of Toronto’s medical class even if it was the most desirable medical school in the country. This puts smaller medical schools at a disadvantage relative to larger medical schools.

This year I tweaked the methodology to try and account for this. My small insight into this was to imagine what the allocation of medical students would look like if every medical school in the country was equally desirable. Medical students would be indifferent in going to either Memorial or Toronto or any other medical school for a residency. In this utopian scenario they would all apply to all of the universities and get accepted at roughly equal rates to each university (assuming a roughly similar distribution of talent across the medical classes). The result would be that each medical class would allocate a percentage of their medical students in proportion to the size of the residency class at the accepting medical school. Under these conditions, if the University of Toronto has a residency class that comprises 10% of the total spots across the country, then it should take 10% of the medical class from Memorial (and 10% from UBC, and 10% from Western etc.)  If Memorial has a residency class that comprises 3% of the total residency spots across the country, then it should take 3% of the medical students from Toronto (and 3% from UBC and so on).

The way to measure desirability is then to estimate this baseline scenario and see how far real life deviates from it for each medical school. Deviations above the baseline mean that the medical school is more popular than it would be in a scenario where all medical schools were equally popular. Deviations below the baseline mean that the medical school is less popular than it would be in that scenario.

The other little tweak to the model is to get a measure of the effective spots that a residency class consists of and that a medical school contributes to the total pool of medical students. This tries to acknowledge that a medical class consists of leavers and stayers. A medical student who stays at their medical school for residency means one fewer student from outside who can get into that class. A medical student who stays also means one fewer student in the pool of prospective residents available to other schools. I also added the unmatched spots into the pool of total residency spots available, which was something I did not do last year.

The statistic for ranking the medical schools is then based on the following set of equations. The utopian benchmark case where every medical school is equally popular is defined as:

B_a = (R_a − r_a + U_a) / Σ_j (R_j − r_j + U_j)
For a medical school a, the utopian benchmark value B is the ratio between the effective spots at a medical school and the total residency spots available at all medical schools. The effective spots at a medical school is the difference between the total residency spots at that medical school (R) and the medical students from medical school a that go to that school for residency (r). Add to this the unmatched residency spots, U, at the university for a total number of effective residency spots at the medical school. The denominator is the total number of available residency spots across the country. This is all of the unmatched spots, plus the open spots at each university - the difference between each residency class and the number of medical students that stay at their home medical school for residency.

The actual case that we observe is described by the following relationship:

A_ab = r_ab / (M_b − r_b)
r is the number of medical students from school b who go to school a. Divide this by the total number of medical students available from school b, which is the difference between the total medical students at school b and the number of them that stay at school b for residency (r).

The ranking statistic for school a is then the difference between the benchmark scenario, B, and the actual real-life scenario, A:

D_ab = B_a − A_ab

Add up all of these deviations across all of the medical schools in the country (excluding the medical school in question), take the average, and you get a measure of a medical school's desirability for residency.
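The bookkeeping above is easier to see in code. This is a toy, perfectly symmetric three-school example with invented numbers – each school has 12 spots, keeps 6 of its own students, and sends 3 to each of the other two:

```python
import numpy as np

# moves[a][b] = students from school b who match to school a;
# the diagonal holds the stayers (invented, symmetric numbers)
moves = np.array([[6, 3, 3],
                  [3, 6, 3],
                  [3, 3, 6]])
spots = np.array([12, 12, 12])        # residency spots R at each school
unmatched = np.array([0, 0, 0])       # unmatched spots U
class_size = np.array([12, 12, 12])   # medical class size at each school

stayers = np.diag(moves)
effective = spots - stayers + unmatched   # open spots at each school
B = effective / effective.sum()           # utopian benchmark share B_a

# Actual share A_ab: fraction of school b's leavers who went to school a
A = moves / (class_size - stayers)

n = len(spots)
scores = np.array([
    np.mean([B[a] - A[a, b] for b in range(n) if b != a])
    for a in range(n)
])
print(scores)   # more negative = more desirable
```

In this symmetric case every school gets the same score (−1/6), slightly negative because leavers can only go to the other two schools; it is the differences between schools, not the absolute level, that drive the actual ranking.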

***********************************************************************************

So onto the results and some interpretation is warranted here. First, the more desirable a medical school is for residency, the more negative its desirability score will be. This is a result of the difference between the baseline utopian scenario and what we observe in real life. A more negative number means that medical students are going to these universities above what we would expect. These universities are punching above their weight. A university that has a positive desirability score has a baseline utopian scenario score above what we observe in real life. It is a less desirable place to do a residency. A university that has a desirability score of zero means that the utopian scenario is roughly what we observe in real life.



Using this new methodology for the 2016 match (and 2016 match data from CaRMS), UBC places first and Laval places last. My own university, Toronto, places second while my old medical school, Manitoba, places second last. Now onto the speculation. Why do these rankings look the way they do?

There are a number of obvious reasons as to why people go to places for residency that probably influence these rankings. First is the location. Like the 2014 rankings, the top schools are located in cities and towns with a reputation for being more exciting than the locations of other universities. They also are known for being universities with decent research and clinical reputations. Conversely the prairie universities and the more remote universities (NOSM, Memorial) are difficult to get to, cold, and are not known for their night life. They also, probably because of scaling issues, have less robust research reputations. This makes it difficult to attract and keep residents in these places.

As per last year, Quebec schools have a hard time attracting residents from across the country likely because of the language barrier. Montreal is the one exception here because it seems to be able to attract residents mostly from other Quebecois medical schools. McGill, as the one major English medical school in Quebec, ranks pretty low because you still need to learn French to go there and pay in Quebec is (still) shockingly bad for residents. The difference between a first year resident's salary in Quebec and in Ontario is over $10,000 and English speaking residents have better alternative options outside of Quebec. As a result, McGill suffers disproportionately in the rankings. Residents who speak exclusively French on the other hand, don't have an option to leave the province, which is why Montreal does so well.

Now, how have these rankings changed over the last year? I went back and repeated the same exercise on 2015 data (which was my match year) to see how these rankings have fluctuated.


Since last year there have been a couple of major changes. Memorial and Alberta have both dropped in these rankings. I suspect this may be due to the ongoing fallout from the low price of oil. Previously, the provincial governments could throw money at their medical schools and residents could count on a decent salary over the long run. Both provincial governments have indicated a need to rein in expenditures. Health is one of the bigger portfolios and thus a target for cuts. I suspect residents are pricing this fact in for this year's rankings. The exception to this is Calgary (also in the province of Alberta), which had a largely stable ranking. The University of Calgary has one significant thing going for it, which is that it is located in neither Edmonton nor St. John's. Nevertheless, its ranking may start to collapse over the next couple of years unless oil prices rebound.

Montreal, Queen's, and to a smaller extent, McGill, climbed in this year's rankings. Neither Montreal nor McGill did anything to warrant the bump in rankings as their desirability scores stayed almost constant. They are beneficiaries of other universities falling in desirability. Queen's had a true measurable increase in its desirability score for reasons that escape me.

So I leave you with this little disclaimer. There is a certain wisdom in crowds revealing desirability, but it's wrong to say that this ranking suggests that a medical school will be a better place for your own residency. A lot of decisions go into picking a medical school for residency and the wisdom of the crowds is only one small input into that decision process. A plurality of medical students find that their best option is their own medical school. For every medical school other than Queen's and Ottawa, over 35% of the medical class stayed for residency and Ottawa matched about 30% of its class to its residency program. Queen's, for the third year running, matched the lowest percentage of its own medical students to its residency program at a dismal 14.5%. This in itself should give you some pause about the accuracy of these rankings. If medical students who have been at Queen's for four years have a near-universal interest in leaving, you have to wonder about the desirability of that residency program no matter what these rankings say.

The last point that I'll make is that it really doesn't matter what medical school you match to for a number of reasons. Thanks to accreditation, teaching and learning is largely standardized across programs. Each university has its own research strengths and agenda, but if you want to make it a part of your career, they will bend over backwards to accommodate you. Medical schools love research even if most of what is done is basically useless.

Finally, if you're worried about a medical school based on its location or quality of life, just remember this: for those of you going into a specialty program, it doesn't really matter because you'll all be looking at the inside of a hospital for the next five years anyway. And if you're going into family medicine and you get stuck at an undesirable medical school, you can just move away after two wasted years. Don't sweat it!

But life is funny. I didn't get into any one of the top three programs I applied to and looking back this was a blessing in disguise. If I had gotten my top program I would be in a different city and in a specialty program that would be entirely unsuited to my personality. Instead I got into a program that fits me perfectly. I didn't get the program that I thought I desired the most and it worked out for the best. It'll probably work out for you too.