Often people say, “oh, you’re talking about confirmation bias!” when they hear about one of our cultural cognition studies. That’s wrong, actually.
Do I care? Not that much & not that often. But because conflating the two dynamics can actually interfere with insight, I’ll spell out the difference.
Start with a Bayesian model of information processing—not because it is how people do or (necessarily, always) should think but because it supplies concepts, and describes a set of mental operations, with reference to which we can readily identify and compare the distinctive features of cognitive dynamics of one sort or another.
Bayes’s Theorem supplies a logical algorithm for aggregating new information or evidence with one’s existing assessment of the probability of some proposition. It says, in effect, that one should update or revise one’s existing belief in proportion to how much more consistent the new evidence is with the proposition (or hypothesis) in question than it is with some alternative proposition (hypothesis).
Under one formalization, this procedure involves multiplying one’s “prior” estimate, expressed in odds that the proposition is true, by the likelihood ratio associated with the new information to form one’s revised estimate, expressed in odds, that the proposition is true. The “likelihood ratio”—how many times more consistent the new information is with the proposition in question than with the alternative—represents the weight to be assigned to the new evidence.
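In symbols, the formalization just described looks like this (my rendering of the standard odds form of Bayes’s Theorem, not anything specific to the studies discussed below):

\[
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
\;\times\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio}}
\]

If, for example, a person’s prior odds that a proposition is true are 2:1, and the new evidence is three times more consistent with that proposition than with its negation (a likelihood ratio of 3), her revised odds should be 6:1, or a probability of roughly 0.86.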
An individual displays confirmation bias when she selectively credits or discredits evidence based on its consistency with what she already believes. In relation to the Bayesian model, then, the distinctive feature of confirmation bias consists in an entanglement between a person’s prior estimate of a proposition and the likelihood ratio she assigns to new evidence: rather than updating her existing estimate based on the new evidence, she determines the weight of the new evidence based on her prior estimate. Depending on how strong this entanglement is, she’ll either never change her mind or won’t change it as quickly as she would have if she had been determining the weight of the evidence on some basis independent of her “priors.”
Cultural cognition posits that people with one or another set of values have predispositions to find particular propositions relating to various risks (or related facts) more congenial than other propositions. They thus selectively credit or discredit evidence in patterns congenial to those predispositions. Or in Bayesian terms, their cultural predispositions determine the likelihood ratio assigned to the new evidence. People not only will be resistant to changing their minds under these circumstances; they will also be prone to polarization—even when they evaluate the same evidence—because people’s cultural predispositions are heterogeneous.
See how that’s different from confirmation bias? Both involve conforming the weight or likelihood ratio of the evidence to something collateral to the probative force that that evidence actually has in relation to the proposition in question. But that collateral thing is different for the two dynamics: for confirmation bias, it’s what someone already believes; for cultural cognition, it’s his or her cultural predispositions.
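To make the contrast concrete, here is a minimal sketch in Python. It is my illustration, not a model drawn from any CCP study; the functions lr_confirmation_bias and lr_cultural_cognition and all of the specific numbers are invented for purposes of exposition. The first pulls the weight of the evidence toward the person’s prior; the second pulls it toward whatever value her cultural predisposition makes congenial.

```python
# Toy sketch (my illustration, not a model from any CCP study): two different
# ways the likelihood ratio can get "entangled" with something collateral to
# the probative force of the evidence.

def update(prior_odds, likelihood_ratio):
    """Bayesian updating in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def lr_confirmation_bias(true_lr, prior_odds, strength=0.75):
    """Confirmation bias: the weight assigned to the evidence is pulled
    toward whatever the person already believes (her prior odds)."""
    return (1 - strength) * true_lr + strength * prior_odds

def lr_cultural_cognition(true_lr, predisposition, strength=0.75):
    """Cultural cognition: the weight is pulled toward the value the person's
    cultural predisposition makes congenial, whatever her prior happens to be."""
    congenial_lr = 4.0 if predisposition > 0 else 0.25  # arbitrary illustrative values
    return (1 - strength) * true_lr + strength * congenial_lr

evidence_lr = 2.0  # the same evidence, modestly supporting the proposition

# Confirmation bias: a skeptic (prior odds 1:4) and a believer (4:1) weigh
# the same evidence differently because their priors differ.
print(update(0.25, lr_confirmation_bias(evidence_lr, 0.25)))  # ~0.17
print(update(4.00, lr_confirmation_bias(evidence_lr, 4.00)))  # 14.0

# Cultural cognition: two people with *identical* priors (1:1) end up
# divided because their predispositions assign the evidence opposing weights.
print(update(1.0, lr_cultural_cognition(evidence_lr, predisposition=+1)))  # 3.5
print(update(1.0, lr_cultural_cognition(evidence_lr, predisposition=-1)))  # ~0.69
```

Note that in the second pair of calls the two individuals start from exactly the same prior; the divergence comes entirely from the predisposition-driven weights.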
But likely you can also now see why the two will indeed often look the “same.” If, as a result of cultural cognition, someone has previously fit all of his assessments of evidence to his cultural predispositions, that person will have “priors” supporting the proposition he is predisposed to believe. Accordingly, when such a person encounters new information, he will predictably assign that evidence a likelihood ratio consistent with his priors.
However, if cultural cognition is at work, the source of the entanglement between the individual’s priors and the likelihood ratio he assigns the evidence is not that his priors are influencing the weight (likelihood ratio) he assigns to the evidence. Rather, it is that the same thing that caused that individual’s priors—his cultural predisposition—is also causing his biased determination of the weight the evidence is due. So we might want to call this “spurious confirmation bias.”
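A short extension of the same toy sketch (again mine, and again with arbitrary numbers) shows why the two dynamics look alike in observational data: the predisposition acts as a common cause of both the priors and the likelihood ratios, so the two travel together even though neither is doing anything to the other.

```python
# Toy sketch of "spurious confirmation bias" (my illustration): the person's
# predisposition has already shaped her prior AND now shapes the weight she
# gives to new evidence, so prior and likelihood ratio move together even
# though the prior is not causing the biased weighting.
import random
from statistics import correlation  # Python 3.10+

def simulate_person(predisposition):
    prior_odds = (4.0 if predisposition > 0 else 0.25) * random.uniform(0.8, 1.2)
    likelihood_ratio = (4.0 if predisposition > 0 else 0.25) * random.uniform(0.8, 1.2)
    return prior_odds, likelihood_ratio

priors, lrs = zip(*(simulate_person(p) for p in [+1, -1] * 500))
print(correlation(priors, lrs))  # close to 1: looks just like confirmation bias
```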
Does this matter? Like I said, not that much, not that often.
But here are three things you’ll miss if you ignore everything I just said.
1. If you just go around attributing everything that is a consequence of cultural cognition to confirmation bias, you will not actually know—or at least not be conveying any information about—who sees what and why. A curious person observes a persistent conflict over some risk—like, say, climate change; she asks you to explain why one group sees things one way and another group sees them differently. If you say, “because they disagree, and as a result construe the evidence in a way that supports what they already believe,” she is obviously going to be unsatisfied: all you’ve done is redescribe the phenomenon she just asked you to explain. If you can identify the source of the bias in a person’s cultural predisposition, you’ll be able to give this curious questioner an account of why the groups found their preferred beliefs congenial to begin with—and also of who the different people in these groups are independently of what they already believe about the risk in question.
2. If you reduce cultural cognition to confirmation bias, you won’t have a basis for predicting or explaining polarization in response to a novel risk. Before people have encountered and thought about a new technology, they are unlikely to have views about it one way or the other, and any beliefs they do have are likely to be noisy—that is, uncorrelated with anything in particular. If, however, people have cultural predispositions toward risks of a certain type, then we can predict that such people will, when they encounter new information about this technology, assign opposing likelihood ratios to it and end up polarized!
CCP did exactly that in a study of nanotechnology. In it, we divided subjects who were largely unfamiliar with nanotechnology into two groups, one of which was supplied no information other than a very spare definition and the other of which was supplied balanced information on nanotechnology risks and benefits. Hierarchical individualists and egalitarian communitarians in the “no information” group had essentially identical views of the risks and benefits of nanotechnology. But those who were supplied with balanced information polarized along lines consistent with their predispositions toward environmental and technological risks generally.
“Confirmation bias” wouldn’t have predicted that; it wouldn’t have predicted anything at all.
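The logic of that prediction can also be put in toy form (my sketch, not the study’s design or data): in the “no information” condition, risk perceptions are just noise, uncorrelated with cultural outlooks; once everyone processes the same balanced information through predisposition-driven likelihood ratios, perceptions line up with those outlooks.

```python
# Toy sketch of the polarization prediction in point 2 (my illustration, not
# the CCP nanotechnology study's actual model or data).
import random
from statistics import correlation  # Python 3.10+

predispositions = [random.choice([+1, -1]) for _ in range(1000)]

# "No information" condition: views on the novel risk are just noise,
# so they are uncorrelated with cultural predispositions.
no_info_odds = [random.uniform(0.5, 2.0) for _ in predispositions]
print(correlation(predispositions, no_info_odds))   # near 0: no polarization

# "Balanced information" condition: everyone evaluates the same information,
# but each person assigns it a predisposition-congenial likelihood ratio.
def after_balanced_info(predisposition):
    prior_odds = 1.0                                  # no settled view beforehand
    lr = 4.0 if predisposition > 0 else 0.25          # opposing weights, same evidence
    return prior_odds * lr * random.uniform(0.8, 1.2)

informed_odds = [after_balanced_info(p) for p in predispositions]
print(correlation(predispositions, informed_odds))   # strongly positive: polarization
```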
3. Finally and likely most important, if you stop inquiring into the causal mechanisms at the point at which cultural cognition merely looks like confirmation bias, you won’t be able to formulate any hypotheses about remedies.
Again, confirmation bias describes what’s happening—people are fitting their assessment of evidence to what they already believe. From that, nothing in particular follows about what to do if one wants to promote open-minded engagement with information that challenges people’s existing perceptions of risk.
Cultural cognition, in contrast, explains why what’s happening is happening: people are motivated to fit assessments of evidence to their predispositions. Based on that explanation, it is possible to specify what’s needed to counteract the bias: ways of presenting information or otherwise creating conditions that erase the antagonism between individuals’ cultural predispositions and their open-minded evaluation of information at odds with their priors.
CCP has done experimental studies showing how to do that. One of these involved the use of culturally identifiable experts, whose credibility with lay people who shared their values furnished a cue that promoted open-minded engagement with information about, and hence a revision of beliefs about, the risks of the HPV vaccine.
In another, we looked at how to overcome bias in evaluating climate change evidence. We surmised that individuals culturally predisposed to dismiss evidence of climate change would engage that information more open-mindedly when they learned that geoengineering, and not just carbon-emission limits, was among the potential remedies. The cultural resonances of geoengineering as a form of technological innovation might help to offset in hierarchical individualists (the people who really like nanotechnology when they learn about it) the identity-threatening resonances associated with climate change evidence, the acceptance of which is ordinarily understood to require limiting technology, markets and industry. Our findings corroborated that surmise: individuals who learned about geoengineering responded more open-mindedly to evidence on the risks of climate change than those who first learned only about the value of carbon-emission limits.
Nothing in the concept of “confirmation bias” predicts effects like these, either, and that means it’s less helpful than an explanation like cultural cognition if we are trying to figure out what to do to solve the science communication problem.
Does this mean that I or you or anyone else should get agitated when people conflate cultural cognition and confirmation bias?
Nope. It means only that if there’s reason to think that the conflation will prevent the person who makes it from learning something that we think he or she would value understanding, then we should help that individual to see the difference with an explanation akin to the one I have just offered.
Some references
Rabin, M. & Schrag, J.L. First Impressions Matter: A Model of Confirmatory Bias. The Quarterly Journal of Economics 114, 37-82 (1999).
Kahan, D.M. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).
Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law and Human Behavior 34, 501-516 (2010).
Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).