Tuesday, April 1, 2014

Absolute Confusion: How Researchers Mislead the Public with Relative Risk

This article in Sunday's New York Times about gauging the risk of autism highlights an important confusion in the appraisal of evidence from clinical trials and epidemiological studies that appears to be shared by laypersons, researchers, and practitioners alike:  we focus on relative risks when we should be concerned with absolute risks.

The rational decision maker, when evaluating a risk or a benefit, is concerned with the absolute magnitude of that risk or benefit.  A proportional change from an arbitrary baseline (a relative risk) is irrelevant.  Here's an example that should bring this into keen focus.

If you are shopping and you find a 50% off sale, that's a great sale.  Unless you're shopping for socks.  At $0.99 a pair, you save $0.50 with that massive discount.  Alternatively, if you come across a 3% sale, but it's at the Audi dealership, that paltry discount can save you $900 on a $30,000 Audi A4.   Which discount should you spend the day pursuing?  The discount rate mathematically obscures the value of the savings.  If we framed the problem in terms of absolute savings, we would be better consumers.  But retailers know that saying "50% OFF!" attracts more attention than "$0.50 OFF!" in the sock department.  Likewise, car salesmen know that writing "$900 BELOW INVOICE!" on the windshield looks a lot more attractive than "3% BELOW INVOICE!"

The rational shopper makes no such distinctions - s/he is looking for big absolute savings rather than big proportional discounts.  If s/he has limited time and is going to invest it in the goal of saving the most money, s/he will pursue opportunities with large absolute savings, rather than large proportional discounts.  But there are abundant data from behavioral economics that show that we are not rational shoppers - to our peril.

Likewise with medical and epidemiological data.  If you have cancer, and there is a chemotherapy drug that cuts your absolute risk of dying by 10 percentage points, it matters little whether it reduces that risk from 50% to 40% (a relative risk reduction of 20%) or from 20% to 10% (a relative risk reduction of 50%) - 10 points is 10 points (just like $10 is $10), regardless of the baseline from which we start.  Note also that as we approach zero risk, small absolute risk reductions can produce large relative risk reductions.  Conversely, a non-trivial absolute risk reduction on the order of 5 points can appear piddling when the baseline risk is 75%.
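To make the arithmetic concrete, here is a minimal Python sketch (not from the original post) checking the two chemotherapy scenarios numerically:

```python
def absolute_risk_reduction(baseline, treated):
    """Difference in event rates (in risk units, e.g. 0.10 = 10 points)."""
    return baseline - treated

def relative_risk_reduction(baseline, treated):
    """The absolute reduction expressed as a fraction of the baseline risk."""
    return (baseline - treated) / baseline

# Scenario 1: risk falls from 50% to 40%
print(round(absolute_risk_reduction(0.50, 0.40), 2))  # 0.1
print(round(relative_risk_reduction(0.50, 0.40), 2))  # 0.2

# Scenario 2: risk falls from 20% to 10%
print(round(absolute_risk_reduction(0.20, 0.10), 2))  # 0.1
print(round(relative_risk_reduction(0.20, 0.10), 2))  # 0.5
```

Same 10-point absolute benefit in both scenarios; the relative figure swings from 20% to 50% purely because the baseline changed.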

Similarly, relative risks can scare us more than there is cause to be scared, as in the case of autism.  The risk of Down syndrome in offspring increases five-fold as maternal age increases from 30 to 40 years.  A 500% increase!  A relative risk of 5!  But the absolute risk goes from 0.0005 to 0.0025 - an absolute risk increase of 0.002, or two tenths of a percent.  When most people (me included) think of 0.2%, we don't get too worked up.  (Maybe we should.)
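The same arithmetic, sketched in Python using the maternal-age rates as given above:

```python
risk_at_30 = 0.0005   # baseline risk at maternal age 30 (as given above)
risk_at_40 = 0.0025   # risk at maternal age 40

relative_risk = risk_at_40 / risk_at_30       # 5-fold: the scary "500% increase"
absolute_increase = risk_at_40 - risk_at_30   # 0.002: two tenths of a percent

print(round(relative_risk, 2))
print(round(absolute_increase, 4))
```

A relative risk of 5 and an absolute increase of 0.2% are the same fact, framed two different ways.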

So why do we continue to promulgate relative risks?  Here are some ideas:

  1. Like the sock retailer and the car salesman, researchers (especially those employed by for-profit pharmaceutical companies) want to promote their findings and their wares.  It helps get grants, builds renown, increases press coverage, sells drugs, and enhances profits.  So if the relative number is greater than the absolute number (and, whenever the baseline risk is below 100%, it always is), that's the one they're going to report.  Even if it inflates a minuscule benefit because of a low baseline risk rate.
  2. In some epidemiological study designs (e.g., case-control studies, some meta-analyses) an absolute risk reduction cannot be ascertained, and odds ratios (a surrogate for relative risk) are the only way to report the results.
  3. If you want to compare the "prowess" of therapies across diseases with different outcome rates (perhaps as a surrogate for the prowess of the researchers who investigate them), it seems more fair to compare relative risk reductions than absolute risk reductions.  It's like a weight loss competition - if the guy who enters at a weight of 495 pounds loses 20 pounds, is he really a superior competitor to the guy who entered at a weight of 225 pounds and lost 15 pounds?  Insofar as the competition for academic prowess resembles a weight loss contest, comparing relative reductions might be the way to go.
  4. If you're a public health type, and you want to convince somebody to engage in some behavior which on the population level has measurable benefits, you have to convince people on the individual level to engage in behaviors (motorcycle helmets, seat belts, statins, etc.) which only very slightly decrease their individual risk.  Relative risks are a way to do that, because they "appear" greater and more convincing than absolute risks.

The problem is that relative risks don't tell us how much benefit we get from a therapy or the value of an epidemiological finding - they only tell us that benefit in reference to a baseline which we may or may not know.  And if we need to know the baseline in order to determine the absolute value of the benefit (by mathematical deconstruction), then we should just report the absolute benefit in the first place.
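That "mathematical deconstruction" is just one multiplication - and it is impossible without the baseline, which is the point.  A hypothetical helper, sketched in Python:

```python
def arr_from_rrr(baseline_risk, rrr):
    """Recover the absolute risk reduction from a relative risk reduction.

    You cannot do this without the baseline risk: a relative risk
    reduction alone does not tell you the size of the benefit.
    """
    return baseline_risk * rrr

# The same 50% relative risk reduction, at two different baselines:
print(arr_from_rrr(0.20, 0.50))   # baseline 20%  -> 10-point absolute benefit
print(arr_from_rrr(0.002, 0.50))  # baseline 0.2% -> 0.1-point absolute benefit
```

Two identical headline numbers ("cuts risk in half!") hiding a hundred-fold difference in actual benefit.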


  1. Many reports in the media about the benefits of treatments present risk results as relative risk reductions rather than absolute risk reductions. This often makes the treatments seem better than they actually are.

  2. Absolutely correct for healthcare decisions. I think. Your shopping analogy didn't resonate with me, though. We NEED socks, and we have to buy them. If I only buy socks throughout my lifetime when they are 50% off, I think it's likely that I'll save the same $1,000, incrementally. On the other hand, an Audi A4 is a luxury that I don't necessarily need. The ability to save $1,000 now on something that puts me in debt for years doesn't balance out for me. I'll agree if you say that's a value judgement, rather than a mathematical one. However, so are most things we do for our health. The tiny absolute increments that we get from scores of little things we do - exercise, eat green, leafy vegetables, wear a seatbelt, get good sleep - all likely add up to some small improvement in longevity and, if we're lucky, feeling better while we're doing it. Avoidance of the cigarettes or drugs that I actually don't need (possibly analogous to not buying the Audi, for those of us who abhor the debt) adds, as well. This has no bearing on the argument about which drug to use in any given disease, but is just something to think about.

    Similarly, while another argument of yours, regarding the normalization fallacy in the ICU, holds some water, it does occur to me that the tiny increments we have made across a lot of fronts in the ICU (and the pre-ICU) - possibly including normalizing a potassium level - have resulted in an overall substantially higher survival rate in our ICUs. And that improved survival allows us to quit focusing solely on ICU survival and turn a lot of attention to what it means to survive the ICU, in terms of long term well being, and to look for the incremental behaviors that can improve the long term, as well.

    So, I guess I'm saying that these small, incremental absolute risk reductions should not be dismissed out of hand. If they are expensive, maybe yes, but if they are essentially free, as in head of bed at 30 degrees, then the tiny absolute improvement is likely worth the investment. Additional thoughts welcome.

  3. Thanks for your comments. I will admit that I don't totally follow. One issue is whether absolute vs relative risks have primacy for decision making, and I think it's clear that absolute ones are the ones that matter. Whether the decision is one of necessity versus luxury does not matter for the absolute/relative distinction. And yes, the luxury/necessity distinction is one of values.

    I agree that things are additive, but that is also irrelevant to a decision. What matters is whether the "net utility" of each little thing you do is positive for you - that is, whether the positive value times its probability, less the costs of doing it, comes out positive. If it is easy for you to avoid red meat, then surely do it. If it is a big struggle, you may carefully review the probability*value of avoiding red meat and conclude that it is so small that it is not worth the effort for you.

    We should have called it the normalization fallacy. I can apply the same logic there - I think the probability that replacing potassium helps people is very small and I opine that on balance the effort of doing it exceeds the positive value and we should not do it.

    I think you may wish to consider that incremental gains in ICU outcomes have to do with other things, such as less illness severity and cohort effects, and the fact that much/most of the improvement *may* be due to the fact that we have stopped doing harmful stuff in the past 20 years.

    You may also be interested in the prevention/therapeutic paradox: http://www.medicalevidenceblog.com/2015/01/the-therapeutic-paradox-whats-right-for.html

    Thanks for your interest!