I imagine being in a medical residency program is much like being in prison. I can only imagine because I have never been to prison. But there were some days in residency that I would have gladly traded spots.
In prison, you have
little to no responsibility. Sure, they give you little jobs like folding laundry
or making license plates, but let's face it, society doesn't trust prisoners to
run power plants or safety test cars. So too is it with residency, where staff
doctors trust residents about as much as one would trust a toddler to build an
airplane. The enthusiasm is there, but the knowledge is lacking, and can you
really blame anyone other than yourself for the consequences when you trust a
kid to build an airplane? This is why staff doctors get so angry when you accidentally maim a couple of patients. The licensing board isn't going to let a staff doctor keep their medical license when they trust an unshaven goof of a resident to practice medicine on real patients, especially when the result is someone losing a thumb.
In prison, unless you are a
member of a gang, you’re probably another prisoner's bitch. In residency, this
gang is known as the staff physicians and all work flows downwards to the
residents (the bitches) with most of the cash flowing in the opposite
way. Ponzi schemes are central to medical education. I once did a clinic day
where my supervisor sat in a back room and ate Slim Jims. I got to see the
slate of patients, about half of whom were coming in for a driver's physical
examination that was not covered by public health insurance. So the patient would hand me $80, and I would dutifully go through a silly
examination that I can almost guarantee has saved on the order of tens of
lives. The patient would leave and I would take the money to the back room and
gently place it on the desk beside the staff physician, much like a sex worker would pay
their pimp. The staff physician would grunt and continue eating Slim Jims
and watching YouTube, and I would slink back to the exam room to earn them more
money.
I expect that in prison,
excitement is something that is in high demand. It’s a good day when there’s a shiv
fight or prison riot if only to break up the monotony. But once you get back
onto the mean streets, I would expect one's lust for shivving and rioting
decreases because these activities are usually parole violations. I remember in
residency that I would be similarly drawn to exciting medical cases. You can only review so many runny noses with a staff doctor before you want to gouge your eyes out with a tongue depressor. An occasional heart attack livens things up. Now, I’d rather see runny noses because it pays the same and there
isn't the feeling of black dread as you realize that you have to solve a
person's emergent medical problem. It's a lot easier as a resident to go
through the kabuki of a physical exam, babble about some barely relevant medical
study to your supervisor, and then assume your staff doctor will
solve the problem. At the very least, they can take the blame when things go
wrong.
I’m using a lot of past tense
phrasing here because I am no longer a resident doctor. I am a certified,
excitement-avoiding, “responsible” doctor. Unless you have a truly frightening
life, this should be the scariest thing you have read today. I can prescribe narcotics with the swipe of a pen. Nurses no longer treat me with the disdain that I probably deserve. Residents
and medical students who I have worked with have been calling me “Doctor”
instead of “hey you” and they don’t even have a choice because I’m a new member
of the gang.
As it is now August, I have a full month of experience under my belt and I have yet to be proven to have killed anyone as a staff physician. I'm currently batting 1.000. It's probably an unsustainable track record, but that isn't necessarily a bad thing. In medicine, mistakes are often the best source of teaching, and catastrophic mistakes are lessons that a doctor never forgets.
The idea of an acceptable level of death to train doctors is routinely brought up in medical education. One reflection of this is the “July Effect”: as new medical students graduate in July and become resident MDs, they suddenly have a lot more responsibility without the prerequisite knowledge. They're toddlers trying to build airplanes. As a result, a lot more patients die in the month of July than otherwise would. As a new, independent practitioner with significantly more expensive malpractice insurance, I suspect this same effect must appear as resident doctors graduate to fully fledged staff doctors.
To try and find this effect, we have to go to provincially
aggregated data. Statistics Canada has tracked provincial deaths by month since 1990. To
this we add a July estimate for population to create a deaths per population
measure. The Canadian Medical Association keeps a file on physicians entering
independent practice by the school they graduated from. I assume that the
province in which a doctor practices in July is the one in which their medical
school is located (a strong assumption, but there is probably some correlation
between the two).
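A rough sketch of the panel construction described above. The data frames and column names here are toy stand-ins of my own invention, not the actual Statistics Canada or CMA files:

```python
# Hypothetical sketch of building the province-year panel: July deaths
# per capita and population-adjusted new graduates. All numbers are fake.
import pandas as pd

deaths = pd.DataFrame({          # monthly deaths by province (StatsCan-style)
    "province": ["ON", "ON", "QC", "QC"],
    "year": [2013, 2013, 2013, 2013],
    "month": [6, 7, 6, 7],
    "deaths": [8200, 8100, 5300, 5400],
})
pop = pd.DataFrame({             # July population estimates
    "province": ["ON", "QC"], "year": [2013, 2013],
    "pop": [13_500_000, 8_100_000],
})
grads = pd.DataFrame({           # new grads assigned to their school's province
    "province": ["ON", "QC"], "year": [2013, 2013],
    "new_grads": [950, 480],
})

panel = (deaths.query("month == 7")
         .merge(pop, on=["province", "year"])
         .merge(grads, on=["province", "year"]))
panel["deathpop"] = panel["deaths"] / panel["pop"] * 100_000  # per 100k people
panel["respop"] = panel["new_grads"] / panel["pop"] * 100_000
```

The year-over-year differencing used in the regression would then be a `groupby("province").diff()` on these two columns.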
Now, because this is aggregate data rather than microdata, it may pick up a
number of effects. We could get around this by linking patients to physicians,
which is what real studies of the July effect usually do, but I don't have
legal access to any data that would allow it. From a theoretical perspective,
there are two major effects that new
doctors might have on the mortality rate. The straightforward effect is
that more doctors should mean fewer deaths. Patients go to a doctor to improve
their health and so when provinces graduate more doctors, it should result in
fewer deaths. Conversely, if those doctors are bad enough at producing health (think leeches and bloodletting), more
doctors might translate into more deaths.
The interaction of these two effects may be a function of
the supply of new doctors in the province. If you have a restricted stock of
doctors, then people can’t get very timely access to a medical opinion and
management. Additional graduating doctors mean that people are at least getting the medical opinion of a bad doctor, and that may be better than no doctor. Broken clocks are right twice a day.
Once you get past a certain saturation point, though, this effect might tail off. The marginal benefit of another new grad is basically zero once the supply is large enough to ensure that everyone can see a doctor. Furthermore, large numbers of new physician grads may siphon off patients from the stable stock of older, experienced doctors, causing damage. I suspect that Toronto suffers from this problem because you can't throw a reflex hammer without hitting the office of a new physician.
And this is what the aggregate data shows. A quadratic function seems to
best fit the data. At lower levels, graduating additional residents seems to
lower the mortality rate in a province. Once
you get past a certain point, though, the relationship reverses itself, and a higher
population-adjusted graduation rate of residents leads to increasing mortality. This
holds even once you control for year of graduation and provincial fixed
effects.
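To put a rough number on that turning point: taking at face value the point estimates from the regression table in the update below (about -2.82 on the population-adjusted resident measure and 1.14 on its square), the implied mortality-minimizing level is

$$\frac{\partial\,\mathrm{deathpop}}{\partial\,\mathrm{respop}} = -2.82 + 2(1.14)\,\mathrm{respop} = 0 \quad\Longrightarrow\quad \mathrm{respop}^{*} \approx 1.24$$

Below that level, an extra graduating resident is associated with lower mortality; above it, with higher mortality.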
If you believe this result, the implication is twofold.
First, if new doctors really want to produce health, the best place to do this
is in a place that has fewer graduating doctors. This seems like an obvious
result. The more subtle result is that if a new doctor also wants to minimize the
amount of damage that they can do, they should avoid places with an oversaturation of new doctors. Practicing in places with higher physician graduation rates seems to kill more people.
That isn't to say that new grads don't kill people even in places that have few doctors; it's just that the net result seems to be lives saved. But saving lives is easy. Coming to grips with catastrophe is what is difficult.
Update for the nerds:
So I've been rightly criticized for the above graph not convincingly demonstrating the argument that I've put forward about the July effect. There is obvious clustering of the data by province, and there are other obvious omitted-variable problems that might invalidate the relationship demonstrated in the figure. I will say that I put forward this argument based on the graph as well as a regression analysis, which I describe below. The problem is that whenever you put a regression table in anything, people tend to find better things to do than read your blog.
The basic econometric tool I used is a fixed-effects regression. The regression analysis is described as:
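(The equation image doesn't seem to have survived here. Reconstructing from the description that follows, the first-difference specification was presumably something like the following, with all symbol names being my guesses:)

$$\Delta D_{iy} = \beta_1\, \Delta R_{iy} + \beta_2\, (\Delta R_{iy})^2 + \gamma_y + \epsilon_{iy}$$

where $\Delta D_{iy}$ is the year-over-year change in population-adjusted July deaths in province $i$ in year $y$, $\Delta R_{iy}$ is the year-over-year change in population-adjusted graduating residents, and $\gamma_y$ are the year dummies.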
The outcome variable is the difference in the number of deaths in province i in July of year y from the previous year. The variables of interest are the difference in the number of residents graduating in July from the number that graduated the year before in a province, and that variable's square. This lets us see whether a quadratic function fits the data better than a linear regression. Both the number of deaths and the number of residents in a province are population adjusted. A battery of dummy variables controls for year fixed effects across provinces. Epsilon is an error term that is distributed i.i.d. Because this is a fixed-effects regression, the above regression is, under certain assumptions, equivalent to:
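(Again, the equation image is missing; the equivalent levels specification with province fixed effects would presumably read, using the same guessed symbols:)

$$D_{iy} = \beta_1\, R_{iy} + \beta_2\, R_{iy}^2 + \gamma_y + \alpha_i + \epsilon_{iy}$$

where $\alpha_i$ is the province fixed effect that differencing removes.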
In this second regression, province fixed effects are now included. The major assumption for equivalency is that the unobserved differences between provinces do not change in a significant way over the period of June to July in a given year. For example, one omitted variable that may influence these results is the overall age of a population. Older people are generally sicker and thus require more doctors, so what we may be observing in the diagram is that places with more residents are in fact older provinces whose people were going to die anyway; it's not the number of residents but this other variable that causes the increased mortality. In a fixed-effects regression, though, this effect is essentially scrubbed out if you assume that the age of a population doesn't change appreciably over the month. So while age may be a long-run driver of mortality, it probably isn't driving the change in deaths from June to July in a given province in a given year. You can make a similar argument for provincial health budgets and the overall health of a population. The regression results are below:
Fixed-effects (within) regression Number of obs = 176
Group variable: province Number of groups = 8
R-sq: within = 0.2434 Obs per group: min = 22
between = 0.0102 avg = 22.0
overall = 0.0058 max = 22
F(23,145) = 2.03
corr(u_i, Xb) = -0.2484 Prob > F = 0.0065
------------------------------------------------------------------------------
deathpop | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
respop | -2.82058 1.027699 -2.74 0.007 -4.851787 -.7893741
respopsq | 1.139464 .6095551 1.87 0.064 -.065297 2.344225
|
year |
1993 | -.1203263 .172442 -0.70 0.486 -.4611509 .2204983
1994 | -.1457825 .1877364 -0.78 0.439 -.5168359 .2252708
1995 | -.1178577 .1872615 -0.63 0.530 -.4879726 .2522571
1996 | -.0881603 .1824971 -0.48 0.630 -.4488584 .2725379
1997 | -.0174301 .187629 -0.09 0.926 -.3882713 .353411
1998 | .0038659 .181337 0.02 0.983 -.3545393 .3622711
1999 | -.0097162 .1856657 -0.05 0.958 -.3766769 .3572445
2000 | -.0441763 .191492 -0.23 0.818 -.4226524 .3342998
2001 | -.0613779 .1980348 -0.31 0.757 -.4527857 .3300299
2002 | .1049069 .1937872 0.54 0.589 -.2781056 .4879193
2003 | -.0123993 .2022462 -0.06 0.951 -.4121307 .3873322
2004 | .2426462 .1909051 1.27 0.206 -.1346701 .6199625
2005 | .1265805 .1867561 0.68 0.499 -.2425353 .4956963
2006 | .0993074 .1881395 0.53 0.598 -.2725427 .4711574
2007 | .2558162 .1855344 1.38 0.170 -.1108851 .6225174
2008 | .1897101 .1791312 1.06 0.291 -.1643356 .5437557
2009 | .1189249 .182879 0.65 0.517 -.2425281 .480378
2010 | .2822041 .1796839 1.57 0.118 -.0729338 .637342
2011 | .3621478 .1765737 2.05 0.042 .013157 .7111386
2012 | .2152357 .1740925 1.24 0.218 -.1288511 .5593226
2013 | .512245 .1743833 2.94 0.004 .1675834 .8569066
|
_cons | 7.268101 .4572194 15.90 0.000 6.364425 8.171777
-------------+----------------------------------------------------------------
sigma_u | .8464953
sigma_e | .3420792
rho | .85961847 (fraction of variance due to u_i)
------------------------------------------------------------------------------
F test that all u_i=0: F(7, 145) = 113.76 Prob > F = 0.0000
This shows that as you increase the number of residents, population-adjusted deaths decrease, but at a decreasing rate (the squared term). As a rough control, this relationship doesn't hold by August of each year: the difference in population-adjusted deaths between August and June in a year is not statistically affected by the year-over-year change in residents graduating in a given province.
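For anyone who wants to poke at the specification, here is a minimal sketch of the two-way fixed-effects setup on synthetic data. The real StatsCan/CMA panel isn't reproduced here; the fake data simply bakes in the table's point estimates, so this only illustrates the mechanics, not the result:

```python
# Sketch of the fixed-effects regression on synthetic data. The fake data
# generator uses the table's coefficients (-2.8, 1.1) plus a province
# effect and noise; nothing here is real mortality data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for p_idx in range(8):                      # 8 provinces, as in the table
    for year in range(1992, 2014):          # 22 years -> 176 observations
        respop = rng.uniform(0.5, 2.5)      # population-adjusted new residents
        deathpop = (7.3 - 2.8 * respop + 1.1 * respop ** 2  # quadratic effect
                    + 0.2 * p_idx                           # province effect
                    + rng.normal(0, 0.3))                   # noise
        rows.append({"province": f"P{p_idx}", "year": year,
                     "respop": respop, "deathpop": deathpop})
df = pd.DataFrame(rows)
df["respopsq"] = df["respop"] ** 2

# C(province) and C(year) dummies implement the two-way fixed effects.
fit = smf.ols("deathpop ~ respop + respopsq + C(province) + C(year)",
              data=df).fit()
print(fit.params["respop"], fit.params["respopsq"])
```

On this synthetic panel the fitted signs match the table: negative on the linear term, positive on the square.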
There is a major concern that I have with the empirical strategy, but I'll leave it to you nerds to try and figure out what I'm worried about.