More or less what I said at a really great NSF-sponsored "trust" workshop at the University of Nebraska this weekend. Slides here.
1. What public distrust of science?
I want to address the relationship of trust to the science communication problem.
As I use the term, “the science communication problem” refers to the failure of valid, compelling, and widely accessible scientific evidence to dispel persistent cultural conflict over risks or other policy-relevant facts to which that evidence directly speaks.
The climate change debate is the most spectacular current example, but it is not the only instance of the science communication problem. Historically, public controversy over the safety of nuclear power fit this description. Another contemporary example is the political dispute over the risks and benefits of the HPV vaccine.
Distrust of science is a common explanation for the science communication problem. The authority of science, it is asserted, is in decline, particularly among individuals of a relatively “conservative” political outlook.
This is an empirical claim. What evidence is there for believing that the public trusts scientists or scientific knowledge less today than it once did?
The NSF, which is sponsoring this very informative conference, has been compiling evidence on public attitudes toward science for quite some time as part of its annual Science Indicators series.
One measure of how the public regards science is its expressed support for federal funding of scientific research. In 1985, the public supported federal science funding by a margin of about 80% to 20%. Today the margin is the same, as it was at every point between then and now.
Back in 1981, the proportion of the public who thought that the government was spending too little to support scientific research outnumbered the proportion who thought that the government was spending too much by a margin of 3:2.
Today around four times as many people say the government is spending too little on scientific research as say it is spending too much.
Yes, there is mounting congressional resistance to funding science in the U.S.–but that’s not because of any creeping “anti-science” sensibility in the U.S. public.
Still not sure about that?
Well, how would you feel if your child told you he or she was marrying a scientist? About 70% of the public in 1983 said that would make them happy. The proportion who said that grew to 80% by 2001, and grew another 5% or so in the last decade.
Are “scientists … helping to solve challenging problems”? Are they “dedicated people who work for the good of humanity”?
About 90% of Americans say yes.
Do you think you can squeeze the 75% of Republicans who say they "don't believe in human-caused climate change" out of the 10% remainder? Better double check your math.
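Here is the back-of-the-envelope arithmetic as a rough sketch. The 90% and 75% figures come from the survey items discussed above; the Republican share of the adult public is my own approximate assumption, not a number from the NSF data.

```python
# Back-of-the-envelope check: could the small slice of the public that
# doesn't hold scientists in high regard account for Republican climate
# skepticism? The Republican population share below is a rough assumption.

regards_scientists_highly = 0.90        # ~90% agree scientists work for the good of humanity
distrustful_remainder = 1 - regards_scientists_highly            # ~10% of the public

republican_share_of_public = 0.30       # assumption: roughly 3 in 10 adults
gop_climate_skeptics = 0.75             # share of Republicans rejecting human-caused climate change

skeptics_share_of_public = republican_share_of_public * gop_climate_skeptics

print(f"'Distrustful' remainder of the public: {distrustful_remainder:.0%}")
print(f"Republican climate skeptics alone:     {skeptics_share_of_public:.0%}")
# Roughly 22% of the public versus a ~10% remainder: the skeptics can't all
# be drawn from the slice that distrusts scientists.
```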
In sum, there isn’t any evidence that creeping distrust in science explains the science communication problem, because there’s no evidence either that Americans don’t trust scientists or that fewer of them trust them now than in the past.
Of course, if you like, you can treat the science communication problem itself as proof of such distrust. Necessarily, you might say, the public distrusts scientists if members of the public are in conflict over matters on which scientists aren’t.
But then the “public distrust in science” explanation becomes analytic rather than empirical. It becomes, in other words, not an explanation for the science communication problem but a restatement of it.
If we want to identify the source of the science communication problem, simply defining the problem as a form of “public distrust” in science—on top of being a weird thing to do, given the abundant evidence that the American public reveres science and scientists—necessarily fails to tell us what we are interested in figuring out, and confuses a lot of people who want to make things better.
2. The impact of cultural distrust on perceptions of what scientists believe
So rather than define the science communication problem as evincing “public distrust in science,” I’m going to offer an evidence-based assessment of its cause.
A premise of this explanation, in fact, is that the public does trust science.
As reflected in the sorts of attitudinal items in the NSF indicators and other sources, members of the public in the U.S. overwhelmingly recognize the authority of science and agree that individual and collective decisionmaking should be informed by the best available scientific evidence.
But diverse members of the public, I’ll argue, distrust one another when they perceive that the status of the cultural groups they belong to is being adjudicated by the state’s adoption of a policy or law premised on a disputed risk or comparable fact.
When risks and other facts that admit of scientific investigation become the focus of cultural status competition, members of opposing groups will be unconsciously motivated to construe all manner of evidence in ways that reinforce their commitment to the positions that predominate within their respective groups.
One source of evidence—indeed, the most important one—will be the weight of opinion among expert scientists.
As a result, culturally diverse people, all of whom trust scientists but who distrust one another’s intentions on policy issues that have come to symbolize clashing worldviews, will end up culturally polarized over what scientists believe about the factual presuppositions of each other’s positions.
That is the science communication problem.
I will present evidence from two (NSF-funded!) studies that support this account.
3. Cultural cognition of scientific consensus
The first was an experiment on how cultural cognition influences perceptions of scientific consensus on climate change, nuclear waste disposal, and the effect of “concealed carry” laws.
The cultural cognition thesis holds that individuals can be expected to form perceptions of risk and like facts that reflect and reinforce their commitment to identity-defining affinity groups.
For the most part, individuals have a bigger stake in forming identity-congruent beliefs on societal risks than they have in forming best-evidence-congruent ones. If a person makes a mistake about the best evidence on climate change, for example, that won’t affect the risk that that individual or anyone he or she cares about faces: as a solitary individual, that person’s behavior (as consumer, voter, etc.) is too inconsequential to have an impact.
But if that person makes a “mistake” in relation to the view that dominates in his or her affinity group, the consequences could be quite dire indeed. Given what climate change beliefs now signify about one’s group membership and loyalties, someone who forms a culturally nonconforming view risks estrangement from those on whose good opinion that person’s welfare, material and emotional, depends.
It is perfectly rational, in these circumstances, for individuals to engage information in a manner that reliably connects their beliefs to their cultural identities rather than to the best scientific evidence. Indeed, experimental evidence suggests that the more proficient a person’s critical reasoning capacities, the more successful he or she will be in fitting all manner of evidence to the position that expresses his or her group identity.
What most scientists in a particular field believe is one such form of evidence. So we hypothesized that culturally diverse individuals would construe evidence of what experts believe in a biased fashion supportive of the position that predominates in their respective groups.
In the experiment, we showed study subjects the pictures and resumes of three highly credentialed scientists and asked whether they were “experts” (as one could reasonably have inferred from their training and academic posts) in the domains of climate change, nuclear power, and gun control.
Half the subjects were shown a book excerpt in which the featured scientist took the “high risk” position on the relevant issue (“scientific consensus that humans are causing climate change”; “deep geologic isolation of nuclear wastes is extremely hazardous”; “permitting citizens to carry concealed guns in public increases crime”), and half a book excerpt in which the same scientist took the “low risk” position (“evidence on climate change inconclusive”; “deep geologic isolation of nuclear wastes poses no serious hazards”; “allowing citizens to carry concealed guns reduces crime”).
If the featured scientist’s view matched the one dominant in a subject’s cultural group, the subject was highly likely to deem that scientist an “expert” whose views a reasonable citizen would take into account.
But if that same scientist was depicted as taking the position contrary to the one dominant in the subject’s group, the subject was highly likely to conclude that the scientist lacked expertise on the issue in question.
This result was consistent with our hypotheses.
If individuals in the real world selectively credit or discredit evidence of “what experts believe” in this manner, then individuals of diverse cultural outlooks will end up polarized over what scientific consensus is.
And this is exactly the case. In an observational component of the study, we found that the vast majority of subjects perceived “scientific consensus” to be consistent with the position that was dominant among members of their respective cultural groups.
Judged in relation to National Academy of Sciences “expert consensus” reports, moreover, all of the opposing cultural groups turned out to be equally bad at discerning what the weight of scientific opinion was across these three issues.
In sum, they all agreed that policy should be informed by the weight of expert scientific opinion.
But because the policies in question turned on disputed facts symbolically associated with membership in opposing groups, they were motivated by identity-protective cognition to assess evidence of what scientists believe in a biased fashion.
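To make the mechanism concrete, here is a minimal toy simulation in Python. It is not the study’s data or analysis code; the number of expert statements, the stipulated weight of expert opinion, and the crediting probabilities are all illustrative assumptions.

```python
import random

random.seed(0)

# Toy model of identity-protective crediting of experts: each subject
# encounters N expert statements, 70% of which endorse the "high risk"
# position (the stipulated true weight of expert opinion). A subject credits
# a statement as coming from a genuine "expert" with high probability when it
# matches the position favored by his or her group, and with low probability
# when it contradicts it. Perceived consensus is read off credited experts only.

N_STATEMENTS = 20            # expert statements each subject encounters
TRUE_SHARE_HIGH_RISK = 0.7   # stipulated true weight of expert opinion
P_CREDIT_CONGRUENT = 0.9     # chance of crediting an identity-congruent expert
P_CREDIT_INCONGRUENT = 0.2   # chance of crediting an identity-incongruent expert

def perceived_consensus(group_position):
    """Share of *credited* experts endorsing 'high risk', for one subject."""
    credited_high = credited_total = 0
    for _ in range(N_STATEMENTS):
        expert_says_high = random.random() < TRUE_SHARE_HIGH_RISK
        congruent = expert_says_high == (group_position == "high")
        p_credit = P_CREDIT_CONGRUENT if congruent else P_CREDIT_INCONGRUENT
        if random.random() < p_credit:
            credited_total += 1
            credited_high += expert_says_high
    return credited_high / credited_total if credited_total else 0.5

def group_average(position, n_subjects=1000):
    return sum(perceived_consensus(position) for _ in range(n_subjects)) / n_subjects

# Both groups see the same 70% weight of expert opinion, yet each "finds"
# a very different scientific consensus.
print(f"'High risk' group perceives {group_average('high'):.0%} of experts saying high risk")
print(f"'Low risk' group perceives  {group_average('low'):.0%} of experts saying high risk")
```

Even though both simulated groups are exposed to the same distribution of expert opinion, each one ends up perceiving a consensus that matches its predisposition, which is the pattern the study’s observational component documents.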
4. The cultural credibility heuristic
The second study involved perceptions of the risks and benefits of the HPV vaccine.
The CDC’s 2006 recommendation that the vaccine be added to the schedule of immunizations required as a condition of middle school enrollment, although only for girls, provoked intense political controversy across the U.S. in the years immediately thereafter.
In our study, we found that there was very mild cultural polarization on the safety of the HPV vaccine among subjects whose views were solicited in a survey.
The degree of cultural polarization was substantially more pronounced, however, among subjects who were first supplied with balanced information on the vaccine’s potential risks and expected benefits. Consistent with the cultural cognition thesis, these subjects were selectively crediting and discrediting the information we supplied in patterns that reflected their stake in forming identity-supportive beliefs.
But still another group of subjects assessed the risks and benefits of the HPV vaccine after being furnished the same information in the form of arguments from debating “public health experts.” These “experts” were ones whose appearances and backgrounds, a separate pretest had shown, would lead study subjects to attribute competing cultural identities to them.
In this experimental condition, subjects’ assessments of the risks and benefits of the HPV vaccine turned decisively on the degree of affinity between the perceived cultural identities of the experts and the study subjects’ own identities.
If subjects observed the position that they were culturally predisposed to accept being advanced by the “expert” they were likely to perceive as having values akin to theirs, and the position they were predisposed to reject being advanced by the “expert” they were likely to perceive as having values alien to their own, then polarization was amplified all the more.
But where subjects saw the expert they were likely to perceive as sharing their values advancing the position they were predisposed to reject, and the expert they were likely to perceive as holding alien values advancing the position they were predisposed to accept, subjects of diverse cultural identities flipped positions entirely. The subjects, then, trusted the scientific experts.
Indeed, polarization disappeared when experts whom culturally diverse subjects trusted told them the position they were predisposed to accept was wrong.
But the subjects remained predisposed to construe information in a manner protective of their cultural identities.
As a result, when they were furnished tacit cues that opposing positions on the HPV vaccine risks corresponded to membership in competing cultural groups, they credited the expert whose values they tacitly perceived as closest to their own—a result that intensified polarization when subjects’ predispositions were reinforced by those cues.
5. A prescription
The practical upshot of these studies is straightforward.
To translate public trust in science into convergence on science-informed policy, it is essential to protect decision-relevant science from entanglement in culturally antagonistic meanings.
No risk issue is necessarily constrained to take on such meanings.
There was nothing inevitable, for example, about the HPV vaccine becoming a focus of cultural status conflict. It could easily, instead, have been assimilated uneventfully into public health practice in the same manner as the HBV vaccine. Like the HPV vaccine, the HBV vaccine immunizes recipients against a sexually transmitted disease (hepatitis B), was recommended for universal adolescent vaccination by the CDC, and thereafter was added to the school-enrollment schedules of nearly every state.
The HBV vaccine had uptake rates of over 90% during the years in which the safety of the HPV vaccine was a matter of intense, and intensely polarizing, political controversy in the U.S.
The reason the HPV vaccine ended up suffused with antagonistic cultural meanings had to do with ill-advised decisions, pushed for by the vaccine’s manufacturer and acquiesced in without protest by the FDA, that made it certain that members of the public would learn about the vaccine for the first time not from their pediatricians, as they had with the HBV vaccine, but from news reports on the controversy occasioned by a high-profile, nationwide campaign to mandate a “girls only” STD shot as a condition of school enrollment.
The risks associated with introducing the HPV vaccine in this manner were not only foreseeable but foreseen and even empirically studied at the time.
Warnings about this danger were not so much rejected as never considered—because there is no mechanism in place in the regulatory process for assessing how science-informed policymaking interacts with cultural meanings.
The U.S. is a pro-science culture to its core.
But it lacks a commitment to evidence-based methods and procedures for assuring that what is known to science becomes known to those whose decisions, individual and collective, it can profitably inform.
The “declining trust in science” trope is itself a manifestation of our evidence-free science communication culture.
Those who want to solve the science communication problem should resist this & all the other just-so stories that are offered as explanations of it.
They should also steer clear of those drawn to the playground-quality political discourse that features competing tallies of whose “side” is “more anti-science.”
And they should instead devote their energies to the development of a new political science of science communication that reflects an appropriately evidence-based orientation toward the challenge of enabling the members of a pluralistic liberal society to reliably recognize what’s known by science.
Does this show scientists today are suffering from lack of public trust? See exchange in comments — & add your interpretations of these and other data!