Sunday, December 23, 2018

Do Doctors and Medical Errors Kill More People than Guns?

Recently released stats showing over 40,000 deaths due to firearms in the US this year have led to the usual hackneyed comparisons between those deaths and deaths due to medical errors, the tired refrain being something like "Doctors kill more people than that!"  These claims were spreading among gun aficionados on social media last week, with references to this 2016 BMJ editorial by Makary and Daniel, from my alma mater Johns Hopkins Bloomberg SPH, claiming that "Medical Error is the Third Leading Cause of Death."  I have been incredulous about this claim when I have encountered it in the past, because it just doesn't jibe with my 20 years of working in these dangerous slaughterhouses we call hospitals.  I have no intention of minimizing medical errors - they certainly occur and are highly undesirable - but I think gross overestimates do a disservice too.  Since this keeps coming up, I decided to delve further.

First, just for the record, I'm going to posit that the figure of 40,000 firearms deaths is reliable, because these deaths will be listed as homicides and suicides in the "manner of death" section of death certificates, and they're all going to be medical examiner cases.  So I have confidence in this figure.

By contrast, the Makary paper has no new primary data.  It is simply an extrapolation of existing data, and its source is a paper by James in the Journal of Patient Safety in 2013.  (Consider for a moment whether you might have any biases if your career depended upon publishing articles in the Journal of Patient Safety.)  That paper also has no new primary data but relies on data from 4 published studies, two of them not peer-reviewed articles but Office of Inspector General (OIG) reports.  I will go through each of these in turn so we can see where these apocalyptic estimates come from.

OIG pilot study from 2008.  This is a random sample of 278 Medicare beneficiaries hospitalized in 2 unspecified and nonrandomly selected counties.  All extrapolations are made from this small sample, which has wide confidence intervals because of its small size (Appendix F, Table F1, page 33).  A harm scale is provided on page 3 of the document, where the worst category on the letter scale is "I", which is:
"An error occurred that may have contributed to or resulted in patient death."  [Italics added.]
Right there is the crux of the entire matter.  If there is a non-zero probability that an error contributed to death, this "possible contribution" becomes part of the statistic, inflating it by an unknown amount.  But the summary statistics in Appendix F, Table F1 do not allow you to see how few of the events are serious ones that "may have contributed to death."  In fact it is a horrible table that is difficult to make sense of.
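To make the small-sample problem concrete, here is a quick sketch of my own (none of this is from the OIG report): a standard Wilson score interval around 3 "I" events in 278 records - the counts from Table G1, discussed below - spans nearly a tenfold range.

```python
# A minimal sketch, assuming 3 "I" events out of 278 records (the counts
# from Table G1, discussed below). The Wilson score interval is a standard
# choice for binomial proportions when events are few.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(3, 278)
print(f"point estimate: {3/278:.2%}, 95% CI: {lo:.2%} to {hi:.2%}")
# -> roughly 0.4% to 3.1%: an almost tenfold range, so any national
#    extrapolation from this sample inherits that uncertainty.
```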

The best way, then, I think, to get a sense of the kinds of errors and adverse events that are being lumped together in the cataclysmic statistics is to look at some of the events themselves, which are tabulated on page 34 in Appendix G, Table G1.  Skimming through the entire table, we see that there were 51 adverse events, but only three of them were "I" events that "may have contributed to death," and all three were described as cascade events.  Here is an example:

  • Cascade event following aortic valve replacement characterized by myocardial infarction, respiratory failure, oliguric renal failure and cardiac arrest

This is an unfortunate cascade, but with this description I can't even determine what the alleged error was.  So too with the other two "I" events which were "cascades".  We can however look at some of the "H" events, one grade less severe.  Here are some examples taken in the order in which they appear in Table G1:

  • Acute respiratory failure following PEG tube placement
  • Respiratory stridor following procedure
  • Cascade event in which right coronary artery dissection and right ventricle laceration occurred during coronary angioplasty surgery
  • Rapid atrial flutter
  • Hypotensive episode during hemodialysis treatment
  • Postoperative hemodynamic instability
These events are also unfortunate, but with the information given I cannot tell how it was ascertained that errors led to them, or that the outcome would have been better if the interventions that purportedly led to them had not been undertaken.  For example, a patient receiving a PEG tube is almost certainly in poor condition to begin with.  The only way to completely prevent complications in this population is to not do the procedure at all.  Note also that any complication whatever during this procedure is going to be lumped in with the adverse events and counted as part of the statistics about how dangerous a place the hospital is, rather than as a marker of how sick and fragile many of the patients are.  In sum, the very definition of "adverse events" obscures and even obliterates the boundary between events that could not have been prevented and did not result from any error, and events in which a medical error directly caused patient harm.

OIG 2010 report.  This is another non-peer-reviewed report, this time surveying the records of a random selection of 780 Medicare beneficiaries as a sample of the total population.  The format and definitions are almost identical to those of the 2008 report (including the definitions and the letter severity scale up to "I"), but with some interesting twists.  The OIG authors now endeavor to determine whether some of these events were "preventable".  Here is their first stab at a definition, on page 7:
 "Generally speaking, physicians assessed events as preventable when they determined that harm could have been avoided through improved assessments or alternative actions."
We'll ignore the issue of hindsight bias for now.  Appendix E offers an algorithm, a series of questions suggested as a guide for the record reviewers, for judging preventability on a qualitative scale.

We'll focus on Q3 and Q4 for now.  If appropriate precautions were not taken to prevent the event, it is listed as "Clearly or likely preventable" (hereafter CLP).  Entirely ignored is the efficacy of the preventive measures, if this is even known.  Whether the event was 100% preventable or 1% preventable is given no weight in this schema; rather, all potentially preventable events are lumped into this category, thus inflating the statistic.  The same issue afflicts Q4: if evaluators were unable to determine whether precautions were taken, they default to the rarity of the adverse event, and if it's rare, it is lumped into CLP - meaning that, following this schema, anything rare that happens is considered preventable.  Clearly, the schema was developed in a way that overplays the preventability of events.
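A toy calculation, with per-event preventability probabilities I made up purely for illustration, shows how much this binary lumping can inflate the count relative to a probability-weighted tally:

```python
# Hypothetical illustration only - these numbers are not from the OIG
# report. Each event gets an assumed probability that better precautions
# would have averted it. The Q3/Q4-style schema counts an event as CLP if
# that probability is merely nonzero; a weighted tally counts expected
# preventable events instead.
prevent_prob = [1.0, 0.5, 0.1, 0.01, 0.01]  # made-up per-event values

clp_count = sum(1 for p in prevent_prob if p > 0)  # schema: all 5 are "CLP"
expected = sum(prevent_prob)                       # weighted: ~1.6 events

print(f"events lumped into CLP: {clp_count}")
print(f"expected preventable events: {expected:.2f}")
# The binary rule roughly triples the tally here; the real-world inflation
# factor is unknowable from the published summary statistics.
```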

To see the actual identified events in this series, we go to Appendix H on page 51.  I count 12 "I" events, so the rate of "I" events in this cohort is 12/780 = 1.5%.  Among them, 6 were considered CLP:

  • Cascade event in which delay in care and administration of aspirin to patient with low platelet count led to pulmonary hemorrhage
  • Retroperitoneal hemorrhage secondary to anticoagulant (warfarin) 
  • Hypoglycemic coma secondary to insulin management 
  • Respiratory failure secondary to sedative (benzodiazepine) 
  • Cascade event in which failure to treat systemic inflammatory response syndrome led to acute renal failure and aspiration pneumonia
  • Cascade event in which untreated febrile neutropenia led to septic shock
In addition, there were two "I" events whose preventability could not be determined, both aspiration events:

  • Cascade event in which aspiration led to respiratory failure, acute renal failure, shock, and cardiac arrest
  • Aspiration pneumonia associated with food intake  

Without seeing the charts of these patients and knowing what action was alleged to have led to the adverse events, I can make no further meaningful comment about them.  I know only that, based on the definitions provided, these events may have been 100% preventable or 0.1% preventable, or they may simply have been rare events that were lumped under CLP.

Taking an "I" event rate of 1.5%, a CLP rate of 50%, and 6.6 million Medicare admissions in 2016, and assuming that all of the "I" events led to death, we would get 49,500 CLP deaths in this population.  Now, non-Medicare patients are admitted too, but they are not as sick and frail on average, so their rate should be lower - and it is still very hard for me to see how we get to Makary's 250,000 annual deaths from medical errors.
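For transparency, here is the arithmetic behind that figure, using only the inputs just cited:

```python
# Worked arithmetic for the 49,500 figure above, under the worst-case
# assumption that every "I" event was a death.
i_rate = 0.015         # rounded from 12/780 (the exact fraction gives ~50,800)
clp_rate = 0.5         # 6 of 12 "I" events judged clearly/likely preventable
admissions = 6_600_000 # Medicare admissions, 2016

deaths = i_rate * clp_rate * admissions
print(f"{deaths:,.0f}")  # -> 49,500
```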

Classen et al, Health Affairs, 2011.  This study found a similar rate of "I" events: 8 in a population of 795, or 1%.  Because of the methodology:
"We used the following definition for harm: “unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment, or hospitalization, or that results in death.”33 Because of prior work with Trigger Tools and the belief that ultimately all adverse events may be preventable, we did not attempt to evaluate the preventability or ameliorability (whether harm could have been reduced if a different approach had been taken) of these adverse events."
we have a lumping problem again and cannot determine to what extent the events were caused by medical errors, how many of them were preventable, and so on.  We are perpetually hamstrung by this definition, it would seem, and ideological, unsupported beliefs that "all adverse events may be preventable" are now biasing the endeavor.  I'm afraid that's just claptrap, and I'm disappointed that editors and reviewers let assertions like that into the peer-reviewed literature with no supporting evidence whatever.

Finally, Landrigan et al, 2010, NEJM.  This was a review of 2341 records from 14 hospitals in North Carolina.  In this cohort, there were 14 "I" harms, of which 9 (9/2341 = 0.38%) were considered preventable based on a Likert scale described in the online appendix.  It uses the same definitions as before, so an "I" event classified as preventable does not represent a death caused by a medical error, but rather a "possible death that was possibly preventable."  What that means in terms of an actual, 100% preventable death is not at all clear.

Even applying the 0.38% "I" rate to the 36 million hospitalizations annually in the US, and assuming all "I"s are preventable deaths, we arrive at 136,000 people who potentially died in potentially preventable ways at the hands of doctors and medical errors.  I'm going to venture a guess that, because most of the "I"s are neither deaths nor preventable, the actual number of deaths per year attributable to medical errors is an order of magnitude less than this - meaning that the number of firearms deaths dwarfs the number of deaths due to medical errors.  Clearly, this is just a guess.  But the entire endeavor is nothing but guesswork.  Because of those who have an interest, in the name of patient safety, in inflating the numbers by making lumpy catch-all definitions that exaggerate the scale of the problem, we just don't have the data that would allow us to make reasonable estimates of the number of deaths per annum due to doctors and medical errors.
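To lay the extrapolation out for all three studies, here is a sketch using each study's raw "I" rate from above; the code and the order-of-magnitude discount are mine, not anything from the papers:

```python
# A back-of-the-envelope sketch of the national extrapolations implied by
# each study's raw "I" rate, applied naively to 36 million annual US
# hospitalizations and assuming every "I" event is a preventable death.
# Note that the OIG and Classen rows apply no preventability discount.
hospitalizations = 36_000_000

i_rates = {
    "OIG 2010 (12/780)":       12 / 780,
    "Classen 2011 (8/795)":    8 / 795,
    "Landrigan 2010 (9/2341)": 9 / 2341,
}

for study, rate in i_rates.items():
    print(f"{study}: {rate:.2%} -> {rate * hospitalizations:,.0f} deaths/yr")

# Landrigan's rate yields ~138,000 (136,000 with the rounded 0.38%).
# Discounting by an order of magnitude - my guess, not a measurement - for
# "I" events that were neither deaths nor preventable lands near 14,000/yr,
# well below the ~40,000 annual firearms deaths.
```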

The paramount point, then, is that because deaths and non-deaths are lumped together, and because the degree of preventability is impossible to ascertain with the schema used for these data, the rate of preventable death from medical error is likely to be greatly overestimated, both by the existing data and by extrapolations based upon them.

This is not to minimize or apologize for medical errors, which clearly occur, can be egregious, are many times unforgivable, and may not be reliably or equitably addressed by the tort system.  But I don't think we should close our eyes, hold our noses, and swallow something whose odor tells us it is obviously rotten either.

Finally, the initial comparison between guns and doctors that led to this post is inane.  It matters little whether natural causes or doctors or cars or AIDS kill more people than guns.  Guns are a significant "preventable" contributor to US mortality, with victims who are usually young, and gun deaths ought to be dealt with in their own right rather than diminished through apples-to-oranges comparisons.

1 comment:

  1. And, there's this gem I had seen before but missed: https://www.ncbi.nlm.nih.gov/pubmed/11466119/, which corroborates my view that many complications are a marker of underlying illness severity.
