Had a great time yesterday at UCLA, where I was afforded the honor of delivering a lecture in the Jacob Marschak Interdisciplinary Colloquium on Mathematics and Behavioral Science. The audience asked lots of thoughtful questions. Plus I got the opportunity to learn lots of cool things (like how many atoms are in the Sun) from Susanne Lohmann, Mark Kleiman, and others.
I believe they were filming and will upload a video of the event. If that happens, I’ll post the link. For now, here’s a summary (to the best of my recollection) & slides.
1. The science communication problem & the cultural cognition thesis
I am going to offer a synthesis of findings generated over the course of a decade of collaborative research on public risk perceptions.
The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy-relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of the HPV vaccine, and the never-ending dispute over the efficacy of gun control.
In addition to being annoying (in particular, to scientists—who feel frustratingly ignored—but also to anyone who believes self-government and enlightened policymaking are compatible), the science communication problem is also quite peculiar. The factual questions involved are complex and technical, so maybe it should not surprise us that people disagree about them. But the beliefs about them are not randomly distributed. Rather they seem to come in familiar bundles (“earth not heating up . . . ‘concealed carry’ laws reduce crime”; “nuclear power dangerous . . . death penalty doesn’t deter murder”) that in turn are associated with the co-occurrence of various individual characteristics, including gender, race, region of residence, and ideology (but not really income or education), that we identify with discrete cultural styles.
The research I will describe reflects the premise that making sense of these peculiar packages of types of people and sets of factual beliefs is the key to understanding—and solving—the science communication problem. The cultural cognition thesis posits that people’s group commitments are integral to the mental processes through which they apprehend risk.
2. A Model
A Bayesian model of information processing can be used heuristically to make sense of the distinctive features of any proposed cognitive mechanism. In the Bayesian model, an individual exposed to new information revises her prior estimate of the probability of some proposition (expressed in odds) in proportion to the likelihood ratio associated with the new evidence (i.e., how much more consistent the new evidence is with that proposition than with some alternative).
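In symbols (this is just the standard odds form of Bayes’ theorem, not notation drawn from any particular CCP paper):

```latex
% Bayesian updating in odds form:
% posterior odds = prior odds x likelihood ratio
\Omega_{\mathrm{post}} = \Omega_{\mathrm{prior}} \times \mathrm{LR},
\qquad
\mathrm{LR} = \frac{\Pr(E \mid H)}{\Pr(E \mid \neg H)}
```

Evidence with LR > 1 should move a rational updater toward proposition H, evidence with LR < 1 away from it, and evidence with LR = 1 should leave her beliefs exactly where they were.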
A person experiences confirmation bias when she selectively searches out and credits new information conditional on its agreement with her existing beliefs. In effect, she is not updating her prior beliefs based on the weight of the new evidence; she is using her prior beliefs to determine what weight the new evidence should be assigned. Because of this endogeneity between priors and likelihood ratio, she will fail to correct a mistaken belief, or will correct it more slowly than she should, despite the availability of evidence that conflicts with that belief.
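One way to sketch this endogeneity formally (the notation is mine, offered only as an illustration): under confirmation bias, the likelihood ratio a person assigns to evidence is no longer a fixed property of the evidence itself but an increasing function of her prior odds:

```latex
% Confirmation bias as endogeneity: the assigned likelihood ratio
% depends on the prior, so updating is self-reinforcing.
\mathrm{LR}_{\mathrm{assigned}} = g(\Omega_{\mathrm{prior}}), \quad g' > 0
\qquad\Longrightarrow\qquad
\Omega_{\mathrm{post}} = \Omega_{\mathrm{prior}} \times g(\Omega_{\mathrm{prior}})
```

With g increasing, strong priors inflate the weight given to congenial evidence and deflate the weight given to uncongenial evidence, so a mistaken belief can persist even as disconfirming evidence piles up.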
The cultural cognition model posits that individuals have “cultural predispositions”—that is, some tendency, shared with others who hold like group commitments, to find some risk claims more congenial than others. In relation to the Bayesian model, we can see cultural predispositions as the source of individuals’ priors. But cultural predispositions also shape information processing: people more readily search out (or are more likely to be exposed to) evidence congenial to their cultural predispositions than evidence uncongenial to them; they also selectively credit or discredit evidence conditional on its congeniality to those predispositions.
Under this model, we will often see what looks like confirmation bias because the same thing that is causing individuals’ priors—cultural predispositions—is shaping their search for and evaluation of new evidence. But in fact, the correlation between priors and likelihood ratio in this model is spurious.
The more consequential distinction between cultural cognition and confirmation bias is that with the former people will be not only stubborn but disagreeable. People’s cultural predispositions are heterogeneous. As a result, people with different values will start with different priors, engage thereafter in opposing forms of biased search for confirming evidence, and selectively credit and discredit evidence in opposing patterns reflective of their respective cultural commitments.
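To make the dynamic concrete, here is a minimal simulation sketch. Everything in it (the group labels, the bias parameter, the functional form of the biased weighting) is an illustrative assumption of mine, not an estimate from any CCP study:

```python
import random

def update(prior_odds, lr):
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    return prior_odds * lr

def assigned_lr(true_lr, predisposition, bias=2.0):
    """Culturally biased weighting (illustrative functional form): evidence
    congenial to the agent's predisposition has its likelihood ratio
    amplified; uncongenial evidence is discounted toward LR = 1, i.e.,
    treated as nearly uninformative."""
    congenial = (true_lr > 1) == (predisposition > 0)
    return true_lr ** bias if congenial else true_lr ** (1.0 / bias)

random.seed(1)

# Two agents with opposing predispositions but identical neutral priors (odds = 1).
odds = {"egalitarian_communitarian": 1.0, "hierarchical_individualist": 1.0}
predisposition = {"egalitarian_communitarian": +1, "hierarchical_individualist": -1}

# Balanced evidence stream: each item mildly supports "risk is real" (LR = 2)
# or "risk is overblown" (LR = 0.5); an unbiased updater would hover near odds = 1.
for _ in range(100):
    true_lr = random.choice([2.0, 0.5])
    for agent in odds:
        odds[agent] = update(odds[agent], assigned_lr(true_lr, predisposition[agent]))

for agent, o in odds.items():
    print(f"{agent}: posterior odds that the risk is real = {o:.3g}")
```

Fed the identical, balanced stream of evidence, the two simulated agents end up with posterior odds many orders of magnitude apart. Stubbornness plus heterogeneous predispositions yields not just error but polarization.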
If this is how people behave, we will see the peculiar pattern of group conflict associated with the “science communication problem.”
3. Nanotechnology: culturally biased search & assimilation
CCP tested this model by studying the formation of nanotechnology risk perceptions. In the study, we found that individuals exposed to information on nanotechnology polarized relative to uninformed subjects, along lines that reflected their cultural groups’ predispositions toward environmental and technological risks. We also found that the observed association between “familiarity” with nanotechnology and the perception that its benefits outweigh its risks was spurious: both the disposition to learn about nanotechnology before the study and the disposition to react favorably to information about it were caused by a (pro-technology) individualistic worldview.
This result fits the cultural cognition model. Cultural predispositions toward environmental and technological risks predicted how likely subjects of different outlooks were to search out information on a novel technology and the differential weight (the “likelihood ratio,” in Bayesian terms) they’d give to information conditional on being exposed to it.
4. Climate change
a. In one study, CCP found that cultural cognition shapes perceptions of scientific consensus. Experiment subjects were more likely to recognize a university-trained scientist as an “expert” whose views were entitled to weight—on climate change, nuclear power, and gun control—if the scientist was depicted as holding the position that was predominant in the subjects’ cultural group. In effect, subjects were selectively crediting or discrediting (or modifying the likelihood ratio assigned to) evidence of what “expert scientists” believe on these topics in a manner congenial to their cultural outlooks. If this is how they react in the real world to evidence of what scientists believe, we should expect them to be culturally polarized on what scientific consensus is. And they are, as we found in an observational component of the study. These results also cast doubt on the claim that the science communication problem reflects the unwillingness of one group to abide by scientific consensus, as well as any suggestion that one group is better than another at perceiving what scientific consensus is on polarized issues.
b. In another study, CCP found that science comprehension magnifies cultural polarization. This is contrary to the common view that conflict over climate change is a consequence of bounded rationality. The dynamics of cultural cognition operate in both heuristic-driven “System 1” processing and reflective “System 2” processing. (The result has also been corroborated experimentally.)
5. The “tragedy of the science communications commons”
The science communication problem can be understood to involve a conflict between two levels of rationality. Because their personal behavior as consumers or voters is of no material consequence, individuals don’t increase their own exposure to harm or that of anyone else when they make a “mistake” about climate science or like forms of evidence on societal risks. But they do face significant reputational and like costs if they form a view at odds with the one that predominates in their group. Accordingly, it is rational at the individual level to attend to information in a manner that reinforces one’s connection to one’s group. This is collectively irrational, however, for if everyone forms his or her perception of risk in this way, democratic policymaking is less likely to converge on policies that reflect the best available evidence.
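The individual-level calculus can be put as a stylized inequality (again, my notation, meant only to make the two levels of rationality explicit):

```latex
% Illustrative payoff comparison for an individual:
% c_err = personal cost of holding a mistaken belief about a societal risk
%         (approximately zero: one vote or consumer choice changes nothing)
% c_rep = reputational cost of deviating from one's group's position
c_{\mathrm{err}} \approx 0 < c_{\mathrm{rep}}
\quad\Longrightarrow\quad
\text{adopt the group-congenial belief}
```

Because the cost of error is diffused across the entire polity while the reputational cost is borne personally, every individual reasons this way, and the aggregate result is the collectively irrational outcome just described.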
The solution to this “tragedy of the science communication commons” is to neutralize the conflict between the formation of accurate beliefs and group-congenial ones. Information must be conveyed in ways—or conditions otherwise created—that avoid putting people to a choice between recognizing what’s known and being who they are.
You will want me to show you how to do that, particularly on climate change. But I won’t. Not because I can’t (see these 50 slides flashed in 15 seconds). Rather, it’s because there’s no risk you’ll fail to ask me what I have to say about “fixing the climate change debate” if I don’t address that topic now, whereas if I do address it, the risk is high that you’ll neglect to ask another question that I think is very important: how does this sort of conflict between recognizing what’s known and being who one is happen in the first place?
Such a conflict is pathological. It’s bad. And it’s not the norm: the number of issues on which positions could become entangled with group-congenial meanings is huge relative to the number on which they actually do. If we could identify the influences that cause this pathological state, we likely could figure out how to avoid it, at least some of the time.
The HPV vaccine is a good illustration. It generated tremendous controversy because it became entangled in divisive meanings relating to gender roles and to parental sovereignty versus collective mandates of medical treatment for children. But there was nothing necessary about this entanglement; the HBV vaccine is likewise aimed at a sexually transmitted disease, was placed on the universal childhood-vaccination schedule by the CDC, and now has coverage rates of 90-plus percent year in & year out. Why did the HPV vaccine not travel this route?
The answer lies in the marketing strategy followed by Merck, the manufacturer of the HPV vaccine Gardasil. Merck did two things that made it highly likely the vaccine would become entangled in conflicting cultural meanings: first, it sought fast-track approval of the vaccine for girls only (only females face an established “serious disease” risk—cervical cancer—from HPV); and second, it orchestrated a nationwide campaign to press for adoption of mandatory vaccination policies at the state level. This predictably provoked conservative religious opposition, which in turn provoked partisan denunciation.
Neither decision was necessary. If the company hadn’t pressed for fast-track consideration, the vaccine would have been approved for males and females within three years (it took longer to get approval for males because of the controversy that followed approval of the female-only version). In addition, even without state mandates, universal coverage could have been obtained through commercial and government-subsidized insurance.
That outcome wouldn’t have been good for Merck, which wanted to lock up the US market before GlaxoSmithKline obtained approval for its own HPV vaccine. But it would have been better for our society: instead of learning about the vaccine from squabbling partisans, parents would have learned about it from their pediatricians, in the same way they learn about the HBV vaccine.
The risk that Merck’s campaign would generate a political controversy that jeopardized acceptability of the vaccine was forecast in empirical studies. It was also foreseen by commentators as well as by many medical groups, which argued that mandatory vaccination policies were unnecessary.
The FDA and CDC ignored these concerns, not because they were “in Merck’s pocket” but because they were simply out of touch. They had no mechanism for assessing the impact Merck’s strategy might have, or for taking the risks that strategy was creating into account in determining whether, when, and under what circumstances to approve the vaccine.
This is a tragedy too. We have tremendous scientific intelligence at our disposal for promotion of the common welfare. But we put its value at risk because we have no national science-communication intelligence geared to warning us of, and steering us clear of, the influences that generate the disorienting fog of conflict that results when policy-relevant facts become entangled in antagonistic cultural meanings.
6. A “new political science”
Cultural cognition is not a bias; it is integral to rationality. Because individuals must inevitably accept as known by science many more things than they can comprehend, their well-being depends on their becoming reliably informed of what science knows. Cultural certification of what’s collectively known is what makes this possible.
In a pluralistic society, however, the sources of cultural certification are numerous and diverse. Normally they will converge; ways of life that fail to align their members with the best available evidence on how to live well will not persist. Nevertheless, accident and misadventure, compounded by strategic behavior, create the persistent risk of antagonistic meanings that impede such convergence—and thus the permanent risk that members of a pluralistic democratic society will fail to recognize the validity of scientific evidence essential to their common welfare.
This tension is built into the constitution of the Liberal Republic of Science. The logic of scientific discovery, Popper teaches us, depends on the open society. Yet the same conditions of liberal pluralism that energize scientific inquiry inevitably multiply the number of independent cultural certifiers that free people depend on to certify what is collectively known.
At the birth of modern democracy, Tocqueville famously called for a “new political science for a world itself quite new.”
The culturally diverse citizens of fully matured democracies face an unprecedented challenge, too, in the form of the science communication problem. To overcome it, they likewise are in need of a new political science—a science of science communication aimed at generating the knowledge they need to avoid the tragic conflict between converging on what is known by science and being who they are.