Monday, June 26, 2023

Anchored on Anchoring: A Concept Cut from Whole Cloth


Welcome back to the blog. An article published today in JAMA Internal Medicine was just the impetus I needed to return after more than a year.

Hardly a student of medicine who has trained in the past 10 years has not heard of "anchoring bias" or anchoring on a diagnosis. What is this anchoring? Customarily in cognitive psychology, to demonstrate a bias empirically, you design an experiment that shows directional bias in some response, often by introducing an irrelevant (independent) variable, e.g., a reference frame, as we did here and here. Alternatively, you can show bias if responses deviate from some known truth value, as we did here. What does not pass muster is to simply say "I think there is a bias whereby..." and write an essay about it.

That is what happened 20 years ago, when an expository essay by Croskerry proposed "anchoring" as a bias in medical decision making, which he ostensibly named after the "anchoring and adjustment" heuristic demonstrated by Kahneman and Tversky (K&T) in experiments published in their landmark 1974 Science paper. The contrast between "anchoring to a diagnosis" (A2D) and K&T's anchoring and adjustment (A&A) makes it clear why I bridle so much at the former.

To wit: First, K&T showed A&A via an experiment with an independent (and irrelevant) variable. Participants spun a wheel marked with numbers, like on the Wheel of Fortune game show, not knowing that the wheel was rigged to stop on either 10 or 65. They were then asked whether the percentage of African countries that are members of the United Nations was higher or lower than that number, and then asked for their own estimate of the percentage. The numerical anchors, 10 and 65, biased responses: participants whose wheels landed on 10 gave lower estimates, and those whose wheels landed on 65 gave higher ones.

There are many nuances, conditions, and explanations of this effect. Here I want to focus on three things. First, the bias has been demonstrated experimentally (and replicated over and over). Second, both the anchor (independent variable) and the response (dependent variable) are numerical, not categorical. Third, the elegance of the experiment lies in the fact that the anchor is so obviously irrelevant to the response - the anchor numbers carry no information about the question asked, and this should have been blatantly obvious to participants.
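If you want the logic of that design laid bare, here is a minimal sketch in Python. The response model and every number in it are invented for illustration; this is emphatically not K&T's data, just the shape of the experiment: an irrelevant numeric anchor, a numeric response, and a directional gap between groups.

    import random
    import statistics

    def respond(anchor: float) -> float:
        """Hypothetical participant: starts from the irrelevant anchor and
        adjusts, insufficiently, toward a private unanchored guess."""
        private_guess = random.gauss(35, 10)   # invented "unanchored" estimate
        adjustment = 0.6                       # < 1.0, i.e., insufficient
        return anchor + adjustment * (private_guess - anchor)

    random.seed(1)
    low  = [respond(10) for _ in range(50)]    # wheel rigged to stop on 10
    high = [respond(65) for _ in range(50)]    # wheel rigged to stop on 65

    print(f"anchor 10 -> mean estimate {statistics.mean(low):.1f}")
    print(f"anchor 65 -> mean estimate {statistics.mean(high):.1f}")
    # The directional gap between the groups is the bias: the anchor carries
    # no information about the question, yet it drags the responses.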

In contrast, A2D is not empirically demonstrated; it is the brainchild of Croskerry in his 2002 essay. It cannot be experimentally demonstrated, because diagnosis is categorical, not numerical. (There are three studies from the 1980s about rating severity of symptoms of mental illness on a continuum; I do not address them here. See Ellis et al. 1990; Richards and Wierzbicki 1990.) But perhaps I'm being too punctilious - maybe it doesn't matter that Croskerry named it after K&T's A&A, or that it has not been experimentally demonstrated in the customary fashion; maybe it's nonetheless a thing that happens (call it whatever you want), and it's useful to consider it to avert error.

Well and good. We will stop making references to K&T when we talk about A2D. But for A2D to be scientifically validated, we must be able to measure it, and to measure it, we need a solid definition. According to Croskerry,

...anchoring is the tendency to fixate on specific features of a presentation too early in the diagnostic process, and to base the likelihood of a particular event on information available at the outset (i.e., the first impression gained on first exposure, the initial approximate judgment)... 

This definition is vague and unsatisfactory, akin to Freud's id and superego, but it's all we've got, so we'll go with it for now. We do have experimental data against anchoring as Croskerry conceived it. The order of symptom presentation did not have an effect on the diagnosis that was selected (see the linked article). Another study of information presentation order, by seasoned, card-carrying decision researchers, found a modest effect, but they did not call it anchoring; they attributed it to the primacy effect that has been well documented for decades. In yet another study, by Arthur Elstein, Gretchen Chapman, and colleagues, physicians gave less weight to the past medical history when it was presented earlier - a reverse anchor, as it were. Thus, there is evidence bearing on Croskerry's conception of A2D, but it is conflicting and unconvincing. Furthermore, dyed-in-the-wool cognitive psychologists did much of this work years before Croskerry's article; they were familiar with K&T's A&A, but did not invoke anchoring in their studies of information presentation order. Croskerry's definition of anchoring appears to be founded on shaky ground.
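For the record, here is what a test of such an order effect looks like when the response is categorical - a sketch in Python with invented counts and hypothetical labels, not data from any of the studies above. Note how different it is from the numeric anchoring design: the natural test is a contingency-table test, not a comparison of mean estimates.

    # Sketch: does presentation order affect a categorical diagnosis choice?
    # Counts and labels are invented for illustration.
    from scipy.stats import chi2_contingency

    # rows: presentation order (symptom A first vs. symptom B first)
    # columns: diagnosis selected (dx1, dx2, dx3)
    counts = [[18, 22, 10],
              [20, 19, 11]]

    chi2, p, dof, _ = chi2_contingency(counts)
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
    # A large p-value here would be consistent with the finding that order
    # of symptom presentation had no effect on the diagnosis selected.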

The other references for A2D in the article leading this post include a book by Croskerry (too expensive for my tastes), an internet link to the Patient Safety Network, and a systematic review of cognitive biases by Saposnik et al. The latter cites three original articles on A2D in its Figure 4. Reference 39, Ogdie et al., is a 2012 publication from U Penn titled "Seen through their eyes: residents' reflections on the cognitive and contextual components of diagnostic errors in medicine". In it, residents were asked to recall diagnostic errors and attribute them to a type of bias - hardly a substantiation of A2D, rather a petitio principii: it exists because the residents think it exists! Reference 50 is an anesthesia essay/review with no empirical data supporting A2D. Finally, reference 52 is a paper purporting to show A2D in pathologists using an automated, computer-based detection system. The authors state:

Our system determined that participants anchored if they entered one or more hypotheses before entering one or more findings. If the added finding supported the reported diagnostic hypothesis or if the participant, after reporting the finding, added one or more diagnoses supported by the findings, our system determined that they correctly used the anchoring heuristic. If, however, the participant did not adjust their initial hypothesis in light of having reported evidence (i.e., findings) that did not support it, we determined that the participant was subject to the anchoring bias.
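As I read it, that rule is a simple event-ordering classifier. Here is a rough Python paraphrase; the event encoding, the names, and the supports mapping are my guesses, not the authors' implementation:

    def classify_anchoring(events, supports):
        """Classify one participant per the quoted rule.
        events:   ordered list of ("hypothesis", dx) or ("finding", f) pairs
        supports: dict mapping each finding to the set of diagnoses it supports
        Returns "no_anchor", "heuristic_used_correctly", or "anchoring_bias".
        """
        if not events or events[0][0] != "hypothesis":
            return "no_anchor"  # a finding came first, so no anchoring by this rule

        hypotheses, findings = [], []
        supported = False   # did a reported finding support a held hypothesis?
        adjusted = False    # was a new dx, supported by the findings, added later?

        for kind, value in events:
            if kind == "finding":
                findings.append(value)
                if any(h in supports.get(value, set()) for h in hypotheses):
                    supported = True
            else:
                hypotheses.append(value)
                if any(value in supports.get(f, set()) for f in findings):
                    adjusted = True

        return "heuristic_used_correctly" if (supported or adjusted) else "anchoring_bias"

    # Example: anchoring on dx_A, then adjusting to dx_B once finding f1 appears.
    events = [("hypothesis", "dx_A"), ("finding", "f1"), ("hypothesis", "dx_B")]
    print(classify_anchoring(events, {"f1": {"dx_B"}}))  # heuristic_used_correctly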

Interestingly, the authors found that what they called anchoring was evenly split between causing bias and leading to the correct answer. Whether their conception of A2D comports with Croskerry's initial proposal is open to question, and I will not belabor it further here.

The available literature, then, does little to support the existence of an A2D bias, nor does it offer physicians guidance on how to avoid the errors it purportedly engenders.

What, then, is this phenomenon that so resonates with physicians and whose popularity has soared in the past 15 years? I propose that what we call anchoring is simply indolent, undisciplined, half-hearted stabs at diagnosis via pattern recognition, compounded by confirmation bias. Fifty years ago, we would simply have called this a failure to formulate a differential diagnosis and bring evidence to bear upon it. Yesteryear's laziness is today's "anchoring".

I will have more to say about the index article, which purports to show anchoring in Veterans Affairs emergency departments, at a later date. For now, suffice it to say that it is extremely disappointing how uncritically this concept has been accepted and promoted, and how thin the gruel of its bibliographic references is. A cynical observer might say that we are anchored on anchoring, having swallowed it fluke, arm, and shaft.

1 comment:

  1. A brilliant post-publication peer review comment regarding this article was published on the JAMA website by Sam Campbell, MB BCh, CCFP(EM), FCFP, Dalhousie University:

    https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2806464

