I had the privilege of being part of a panel discussion last Friday at the great “ScienceOnline Climate” conference in Washington, D.C. The other panel members were Tom Armstrong, Director of National Coordination for the U.S. Global Change Research Program in the Office of Science and Technology Policy; and Michael Mann, Distinguished Professor of Meteorology and Director of the Earth System Science Center at Penn State University; author on the Observed Climate Variability and Change chapter of the Intergovernmental Panel on Climate Change (IPCC) Third Scientific Assessment Report in 2001; organizing committee chair for the National Academy of Sciences Frontiers of Science in 2003; and contributing scientist to the 2007 Nobel Peace Prize awarded to the IPCC. Pretty cool!
The topic was “Credibility, Trust, Goodwill, and Persuasion.” Moderator Liz Neeley (who expended most of her energy skillfully moderating the length of my answers to questions) framed the discussion around the recent blogosphere conflagration ignited by Tamsin Edwards’ column in the Guardian.
Edwards seemed to pin the blame for persistent public controversy over what’s known about climate change on climate scientists themselves, arguing that “advocacy by climate scientists has damaged trust in the science.”
Naturally, her comments provoked a barrage of counterarguments from climate scientists and others, many of whom argued that climate scientists are uniquely situated to guide public deliberations into alignment with the best available scientific evidence.
All very interesting!
But I have a different take from those on both sides.
Indeed, my take is sufficiently removed from what both sides seem to assume about how scientists’ position-taking influences public beliefs about climate change and other issues that I really just want to put that whole debate aside.
Instead I’ll rehearse the points I tried to inject into the panel discussion (slides here).
If I can manage to get those points across, I think it won’t really be necessary, even, for me to say what I think about the contending claims about the role of “scientist advocacy” in the climate debate. That’ll be clear enough.
Those points reduce to three:
1. Members of the public do trust scientists.
2. Members of culturally opposing groups distrust each other when they perceive their status is at risk in debates over public policy.
3. When facts become entangled in cultural status conflicts, members of opposing groups (all of whom do trust scientists) will form divergent perceptions of what scientists believe.
To make out these three points, I focused on two CCP studies, and an indisputable but tremendously important and easily ignored fact.
The first study examined “who believes what and why” about the HPV vaccine. In it we found that members of the cultural groups who are most polarized on the risks and benefits of the HPV vaccine both treat the positions of public health experts as the most decisive factor.
Members of both groups have predispositions: ones that both shape their existing beliefs and motivate them to credit and discredit evidence selectively, in patterns that amplify polarization when they are exposed to information.
But members of both groups trust public health experts to identify what sorts of treatments are best for their children. They will thus completely change their positions if a trusted public health expert is identified as the source of evidence contrary to their cultural predispositions.
Of course, members of the public tend to trust experts whose cultural values they share. Accordingly, if they are presented with multiple putative experts of opposing cultural values, they will identify the one who they (tacitly!) perceive has values closest to their own as the real expert, the one who really knows what he’s talking about and can be trusted, and do what he (we used only white males in the study to avoid any confounds relating to race and gender) says.
There is only one circumstance in which these dynamics produce polarization: when members of the public form the perception that the position they are culturally predisposed to accept is being uniformly advanced by experts whose values they share and positions they are culturally predisposed to reject are being uniformly advanced by experts whose values they reject.
That, of course, is exactly the circumstance we got in the real world with the HPV vaccine…
The second study examined “cultural cognition of scientific consensus.” In that one, we examined how individuals identify expert scientists on culturally charged issues—viz., climate change, gun control, and nuclear waste disposal.
We found that when shown a single scientist with credentials that conventionally denote expertise —a PhD from a recognized major university, a position on the faculty of such a university, and membership in the National Academy of Sciences—individuals readily identified that scientist as an “expert” on the issue in question.
But only if that scientist was depicted as endorsing the position that predominates among members of the subjects’ own cultural group. Otherwise, subjects dismissed the scientist’s views on the ground that he was not a genuine “expert” on the topic in question.
We offered the experiment as a model of how people process information about what “expert consensus” is in the real world. When presented with information that is probative of what experts believe, people have to decide what significance to give it. If, like the vast majority of our subjects, they credit evidence that is genuinely probative of expert opinion only when that evidence (including the position of a scientist with relevant credentials) matches the position that predominates in their cultural group, they will end up culturally polarized on what expert consensus is.
Our study found that to be the case too. On all three of the risk issues in question (climate change, nuclear waste disposal, and laws allowing citizens to carry concealed handguns), the members of our nationally representative sample all believed that “scientific consensus” was consistent with the position that predominates in their cultural group. They were all correct, too: 1/3 of the time, at least if we use National Academy of Sciences expert consensus reports as our benchmark of what “expert consensus” is.
So–
These studies, I submit, support points (1)-(3).
No group’s members understand themselves to be taking positions contrary to what expert scientists advocate. They all believe that the position that predominates in their group is consistent with the views of expert scientists on the risks in question.
In other words, they recognize that science is a source of valid knowledge that they otherwise couldn’t obtain by their own devices, and that in fact one would have to be a real idiot to say, “Screw the scientists—I know what the truth is on climate, nuclear power, gun control, the HPV vaccine, etc., and they don’t!”
That’s the way members of the public are. Some people in our society aren’t like that: they don’t trust what scientists say on these kinds of issues. But they are a really teeny tiny minority (ordinary members of the public on both sides of these issues would regard them as oddballs, whack jobs, wing nuts, etc.).
The tiny fraction of the population who “don’t trust scientists” aren’t playing any significant role in generating public conflict on climate or any of these other issues.
The reason we have these conflicts is because positions on these issues have become symbols of membership in, and loyalty to, the groups in question.
Citizens have become convinced that people with values different from theirs are using claims about danger and risk to advance policies intended to denigrate their way of life and make them the objects of contempt and ridicule. As a result, these debates are pervaded by the distrust that citizens of opposing values have for one another when they perceive that a policy issue is a contest over the status of contending cultural groups.
When that happens, individuals don’t stop trusting scientists. Rather, as a result of cultural cognition and like forms of motivated reasoning, they (all of them!) unconsciously conform the evidence of “what expert scientists believe” to their stake in protecting the status of their group and their own standing within it.
That pressure, moreover, doesn’t reliably lead them to the truth. Indeed, it makes it inevitable that individuals of diverse outlooks will all suffer because of the barrier it creates between democratic deliberations and the best available scientific evidence.
As I indicated, I also relied on a very obvious but tremendously important and easily ignored fact: that this sort of entanglement of “what scientists believe” and cultural status conflict is not normal.
It is pathological, both in the sense of being bad and being rare.
The number of consequential insights from decision-relevant science that generate cultural conflict is tiny (minuscule, even) relative to the number that don’t. There’s no meaningful cultural conflict over pasteurization of milk, high-power transmission lines, fluoridation of water, cancer from cell phones (yes, some people in little enclaves are arguing about this; they get news coverage precisely because the media knows viewers in most parts of the country will find the protestors exotic, like strange species in a zoo), or even the regulation of emissions from formaldehyde, etc., etc.
Moreover, there’s nothing about any particular issue that makes cultural conflict about it “necessary” or “inevitable.” Indeed, some of the ones I listed are sources of real cultural conflict in Europe; all they have to do is look over here to see that things could have been otherwise.
And all we have to do is look around to see that things could have been otherwise for some of the issues that we are culturally divided on.
The HBV vaccine (the one that immunizes children against hepatitis B) is no different in any material respect from the HPV vaccine. Like the HPV vaccine, the HBV vaccine protects people from a sexually transmitted disease. Like the HPV vaccine, it has been identified by the CDC as appropriate for inclusion in the schedule of universal childhood vaccinations. But unlike the HPV vaccine, there is no controversy, cultural or otherwise, surrounding the HBV vaccine. It is on the list of “mandatory” vaccinations that are a condition of school enrollment in the vast majority of states; vaccination rates are consistently above 90% (they are less than 30% in the target population for HPV), and were so in every year (2007–2011) in which proposals to make the HPV vaccine mandatory were a matter of intense controversy throughout the U.S.
The introduction and subsequent career of the HBV vaccine have been, thankfully, free of the distrust that culturally diverse groups experience toward each other when they are trying to make sense of what the scientific evidence is on the HPV vaccine. Accordingly, members of those groups, all of whom trust scientists, are able reliably to see what the weight of scientific opinion is on that question.
So want to fix the science communication problem?
Then for sure deal with the trust issue!
But not the nonexistent one that supposedly exists between scientists and the public.
The real one: the distrust between opposing cultural groups locked in needless, mindless, illiberal forms of status conflict that disable the rational faculties that ordinary citizens of all cultural outlooks ordinarily and reliably use to recognize what is known to science.