I was on a panel Saturday on “public policy and science” at the CSICon conference in Nashville. My friend Chris Mooney was on it, too. I didn’t speak from a text, but this is pretty close to what I remember saying; slides here.
I’m going to discuss the “science communication problem” – the failure of sound, widely disseminated science to settle public controversies over risks and other policy-relevant facts that admit of scientific investigation.
What makes this problem perplexing isn’t that we have no sensible explanation for it. Rather, it’s that we have too many.
There are always more plausible accounts of social phenomena than are actually true. Empirical observation and measurement are necessary—not just to enlarge collective knowledge but also to steer people away from dead ends as they search for effective solutions to society’s problems.
In this evidence-based spirit, I’ll identify what I regard as one good explanation for the science communication problem and four plausible but not-so-good ones. Then I’ll identify a “fitting solution”—that is, a solution that fits the evidence favoring the good explanation over the others.
One good explanation: identity-protective cognition
Identity-protective cognition (a species of motivated reasoning) reflects the tendency of individuals to form perceptions of fact that promote their connection to, and standing in, important groups.
There are lots of instances of this. Consider sports fans who genuinely see contentious officiating calls as correct or incorrect depending on whether those calls go for or against their favorite team.
The cultural cognition thesis posits that many contested issues of risk—from climate change to nuclear power, from gun control to the HPV vaccine—involve this same dynamic. The “teams,” in this setting, are the groups that subscribe to one or another of the cultural worldviews associated with “hierarchy-egalitarianism” and “individualism-communitarianism.”
CCP has performed many studies to test this hypothesis. In one, we examined perceptions of scientific consensus. Like fans who see a referee’s disputed calls as correct or incorrect depending on whether those calls favor their team or its opponent, the subjects in our study perceived scientists as credible experts depending on whether the scientists’ conclusions supported the position favored by members of the subjects’ cultural group or the one favored by members of a rival group on climate change, nuclear power, and gun control.
Not very good explanation #1: Science denialism
“Science denialism” posits that we see disputes over risks in the US because a significant portion of the population doesn’t accept the authority of science as a guide for policymaking.
The same study of the cultural cognition of scientific consensus suggests that this isn’t so. No cultural group favors policies that diverge from scientific consensus on climate change, nuclear power, or gun control. But as a result of identity-protective cognition, they are culturally polarized over what the scientific consensus is on those issues.
Moreover, no group is any better at discerning what scientific consensus is than any other. The ones that seem to have it right on, say, climate change are the most likely to get it wrong on deep geologic isolation of nuclear wastes, and vice versa.
Not very good explanation #2: Misinformation
I certainly don’t dispute that there’s a lot of misinformation out there. But I do question whether it’s causing public controversy over policy-relevant science. Indeed, causation likely runs the other way.
Again, consider our scientific consensus study. If the sort of “biased sampling” we observed in our subjects is typical of the way people outside the lab assess evidence on culturally contested issues, there won’t be any need to mislead them: they’ll systematically misinform themselves about the state of scientific opinion.
Still, we can be sure they’ll very much appreciate the efforts of anyone who is willing to help them out. Thus, their motivation to find evidence supportive of erroneous but culturally congenial beliefs will spawn a cadre of misinformers, who will garner esteem and profit rather than ridicule for misrepresenting what’s known to science.
The “misinformation thesis” has got things upside down.
Not very good explanation #3: “Bounded rationality”
Some people blame controversy over policy-relevant science on deficits in the public’s reasoning capacities. Ordinary members of the public, on this view, know too little science and can’t understand it anyway because they use error-prone, heuristic strategies for interpreting risk information.
Plausible, sure. But wrong, it turns out, as an explanation for the science communication problem: higher levels of science literacy and quantitative reasoning ability, a CCP study found, don’t quiet cultural polarization on issues like climate change and nuclear power; they magnify it.
That makes sense given identity-protective cognition. People who are motivated to form perceptions that fit their cultural identities can be expected to use their greater knowledge and technical reasoning facility to help accomplish that—even if it generates erroneous beliefs about societal risks.
Not very good explanation #4: Authoritarian personality
The original authoritarian-personality research of Adorno and his colleagues is often dismissed as an exercise in polemics disguised as social science.
But in recent years, a serious body of scholarship has emerged on the correlations between dogmatism, closed-mindedness, and like personality traits, on the one hand, and conservative ideology, on the other. This work is insightfully synthesized in Mooney’s The Republican Brain.
Does this revitalized “authoritarian personality” position explain public controversy over policy-relevant science?
It’s odd to think it does, given the role that identity-protective cognition plays in such controversies. Identity-protective cognition affects all types of perception (not just evaluations of evidence but brute sense impressions) relating to all manner of group affinities (not just politics but college sports-team allegiances). So why would the impact of identity-protective cognition be linked to a personality trait found in political conservatives?
But the point is, we should just test things—with valid study designs. Is the score on an “open-mindedness” test a valid predictor of the sort of identity-protective reasoning that generates disputes over climate change, the HPV vaccine, nuclear power, and guns?
I recently did a study designed to answer this question. I examined whether liberal Democrats and conservative Republicans would display identity-protective cognition in assessing evidence of the validity of the Cognitive Reflection Test (CRT)—which is in fact a valid measure of reflective, open-minded engagement with information.
They both did, and to the same degree. When told that climate skeptics got a higher CRT score (and hence were presumably more open-minded), liberal Democrats were much less likely to view the test as valid than when they were told that climate believers got a higher score (indicating they were more open-minded). The mirror-image pattern emerged for conservative Republicans.
What’s more, this effect was magnified by the disposition the CRT measures. That is, the subjects most inclined to engage in conscious, reflective reasoning were the most prone to identity-protective cognition—a result consistent with our findings in the Nature Climate Change study.
The new “authoritarian personality” work might be identifying real differences between liberals and conservatives. But there’s little reason to think that what it’s telling us about them has any connection to identity-protective cognition—the dynamic that has been shown with direct evidence to play a significant role in the science communication problem.
A fitting solution: The separation of meaning and fact
Identity-protective cognition is the problem. It affects liberals and conservatives alike, interferes with the judgment of even the most scientifically literate and reflective citizens, and feeds off even sound information as it creates an appetite for bad information.
We need a solution, then, fitted to counteracting it. The one I propose is the formation of a “science communication environment” protection capacity in our society.
Policy-consequential facts don’t inevitably become the source of cultural conflict. Indeed, they do so only in the rare cases where they become suffused with highly charged and antagonistic cultural meanings.
These meanings are a kind of pollution in the science communication environment, one that interferes with the usually reliable faculty ordinary people employ to figure out who knows what about what.
The sources of such pollution are myriad. Strategic behavior is one. But simple miscalculation and misadventure also play a huge role.
The well-being of a democratic society requires protecting the science communication environment from toxic meanings. We thus need to use our knowledge to understand how such meanings are formed. And we need to devote our political resolve to developing procedures and norms that counteract the forms of behavior—intentional and inadvertent—that generate this form of pollution.
A wall of separation between cultural meaning and scientific fact is integral to the constitution of the Liberal Republic of Science.