Enjoyed the privilege and pleasure of delivering a lecture at the vibrant, bustling University of Nottingham last night. The culture that I and the audience members—students and faculty from the university and curious, critical-thinking members of the larger community—share creates an affinity between us that makes us more like one another than either of us is like most of the members of our respective societies. But of course the U.S. and U.K. both enjoy public cultures that enable those who see pursuit of knowledge and exchange of ideas as the best life—a truly peculiar notion in the eyes of the vast majority—to live it. Are we not morally obliged to reciprocate this benefit?
I wish I had spoken for less time so that I could have engaged my friends in discussion for longer. But the slides are here, and a reconstruction of my fuzzy recollection of what I said is below.
0. The science communication problem. The science communication problem refers to the failure of valid, compelling, and accessible scientific evidence to dispel public conflict over risks and other policy-relevant facts to which that evidence applies. The climate change controversy is the most conspicuous instance of this phenomenon but is not the only one: historically, nuclear power and chemical pesticides generated conflicts between expert and public understandings of risk; today, disputes over GM foods in Europe and the HPV vaccine in the U.S. likewise feature political controversy over facts that admit of empirical investigation.
Of course, no one should find it surprising that risk regulation and like forms of science-informed policymaking are politically contentious. Facts do not determine what to do; that depends on judgments of value, which naturally, appropriately vary among reasoning people in a free society.
But values don’t determine facts either. The answer to the question whether the earth’s temperature has increased in recent decades as a result of human activity turns on empirical evidence the proper understanding of which is the same whether one is an “individualist” or an “egalitarian,” a “liberal” or a “conservative,” a “Republican” or a “Democrat.”
Accordingly, whatever position one thinks the best evidence supports, one should be puzzled by the science communication problem. Indeed, one should be puzzled even if one thinks the best available evidence doesn’t clearly support any particular position: there’s no reason why people of diverse values should be unable to recognize that, much less why their positions in such circumstances should so strongly correlate with their views about the best way to live.
So what explains the science communication problem? And what, if anything, can be done about it?
I will describe evidence relating to two hypothesized explanations for the science communication problem, and then advance a set of normative and prescriptive claims based on what I think (for the time being, of course) is the account that the evidence most compellingly supports.
1. 2 hypotheses & some evidence. The dominant account of the science communication problem among both academic and popular commentators (including the many popular commentators who pose as scholarly ones) is the “public irrationality thesis” (PIT). PIT is related to the often-derided “knowledge deficit” theory—a position I’m not actually sure any serious scholar has ever advanced—but in fact puts more emphasis on the public’s capacity to give proper effect to scientific evidence of risk. Building on Kahneman’s popularization of the “system 1/system 2” conception of dual-process reasoning, PIT attributes public controversy over climate change and other societal risks to the public’s excessive reliance on unconscious, affect-driven heuristics (“system 1”) and its inability to engage in the conscious, effortful, analytic form of reasoning (“system 2”) that characterizes expert risk analysis.
If PIT proponents were trying to connect their understanding to the evolving empirical evidence on public risk perceptions, they’d surely be qualifying their incessant, repetitious, formulaic espousal of it. Those members of the public who display the greatest degree of “system 2” reasoning ability are no more likely to hold views consistent with scientific consensus. Indeed, they are even more likely to be culturally and ideologically polarized than members of the public who are most disposed to use “system 1” heuristic forms of reasoning.
A second explanation for the science communication problem is the “cultural cognition thesis” (CCT). CCT posits that the stake individuals have in their status in affinity groups whose members share basic understandings of the best life can be expected to interact with the various psychological processes by which they make sense of evidence of risk. Supporting evidence includes studies showing that individuals much more readily perceive scientists to be “experts” worthy of deference on disputed societal risks when those scientists support than when they oppose the position that is predominant in individuals’ cultural group.
This selectivity can be expected to generate diverging perceptions of what expert consensus is on disputed risks. And, indeed, empirical evidence confirms this prediction. No cultural group believes that the position that is dominant in its group is contrary to scientific consensus—and across the run of disputed societal risks, all of the groups can be shown to be poorly informed on the state of expert opinion.
The magnification of polarization associated with the disposition to engage in “system 2” forms of information processing also fits CCT. Individuals who are adept at engaging empirical evidence have a resource that those who must rely more on “system 1” substitutes lack for ferreting out evidence that supports their group’s position and rationalizing away the evidence that doesn’t.
2. The tragedy of the science communications commons. PIT, then, has matters essentially upside down. The source of the science communication problem is not too little rationality on the part of the public but rather too much. The behavior of an ordinary individual as a consumer, a voter, or an advocate can have no material impact on the level of risk that person or anyone else faces from climate change. But if he or she forms a position on that issue that is out of keeping with the one that predominates in that person’s group, he or she faces a considerable risk of estrangement from communities vital to his or her psychic and material well-being. Under these conditions, a rational actor can be expected to attend to information in a manner that is geared more reliably to forming group-congruent than science-congruent risk perceptions. And those who are highest in critical reasoning dispositions will do an even better job of this than those whose “bounded rationality” leaves them less able to ferret out the evidence that supports their group’s position or to explain away the evidence that undermines it.
But as individually rational as this form of information processing is, it is collectively irrational for everyone to engage in it simultaneously. For in that case, the members of a self-governing society are less likely to converge, or to converge as quickly as they otherwise would, on the best available evidence.
Yet even that won’t make it any more rational for an individual to attend to information in a manner reliably geared to forming science- as opposed to group-congruent beliefs—because, again, nothing he or she does based on a “correct” understanding will make any difference anyway.
This misalignment of individual and collective interests in the formation of risk perceptions consistent with the best available evidence is the tragedy of the science communications commons.
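To make the incentive structure concrete, here is a minimal sketch in Python of the expected-utility comparison an individual faces. All of the numbers are hypothetical, invented purely for illustration; the point does not depend on their particular values, only on the fact that one person’s beliefs have an effectively zero chance of changing collective outcomes while deviance from one’s group carries a real social cost.

```python
# Stylized, hypothetical payoffs -- illustrative only, not empirical estimates.

P_PIVOTAL = 1e-8        # chance one person's belief affects the policy outcome
POLICY_STAKE = 1_000.0  # value to the individual of a better collective outcome
GROUP_COST = 50.0       # cost of estrangement from one's cultural group

def expected_payoff(science_congruent: bool) -> float:
    """Individual expected payoff, assuming one's group holds the
    non-science-congruent position (the polarized-issue scenario)."""
    policy_benefit = P_PIVOTAL * POLICY_STAKE if science_congruent else 0.0
    social_cost = GROUP_COST if science_congruent else 0.0
    return policy_benefit - social_cost

print(expected_payoff(science_congruent=True))   # ~ -50.0: individually irrational
print(expected_payoff(science_congruent=False))  #    0.0: individually rational
```

However one fills in the numbers, the group-congruent belief dominates for the individual; it is only when everyone reasons this way at once that the collective loss appears.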
3. A polluted science communication environment. The signature attributes of the science communication problem—the correlation between perceptions of risk and group-defining values, and the magnification of this effect by greater reasoning proficiency—are pathological. They are not only harmful, but unusual. The number of societal risks that reflect this pattern is tiny relative to the number that do not.
In the cases in which diverse members of the public converge on the best available evidence, the reason is not that they genuinely comprehend that evidence. Individuals must, not only to live well but simply to live, accept as known by science much more than they could ever make sense of, much less verify, on their own.
Ordinary individuals manage to align themselves appropriately with decision-relevant science essential to their individual and collective well-being not by becoming experts in substantive areas of knowledge but by becoming experts in identifying who knows what about what. Nullius in verba—or “take no one’s word for it,” the motto of the Royal Society—is charming but silly if taken literally. What’s essential is to take the word only of those whose knowledge has been attained by the methods of ascertaining knowledge distinctive of science.
The remarkable ability of ordinary members of the public—ones of diverse reasoning dispositions as well as diverse values—to reliably identify who knows what about what breaks down, however, when positions on issues become entangled in meanings that transform them into symbols of group identity and loyalty. At that point, the stake individuals have in forming group-congruent beliefs will dominate the stake they have in forming science-congruent ones.
Such meanings, then, are a kind of pollution in the science communication environment. They disable the normally reliable faculties that individuals use to ascertain what is known to science.
4. “. . . a new political science . . .” (a) Risks are not born with antagonistic cultural meanings but rather acquire them through one or another set of events that might well have turned out otherwise.
It wasn’t inevitable, for example, that the HPV vaccine would acquire the divisive association with contested norms on gender, sexuality, and parental autonomy that polarized opposing groups’ perceptions of its risks and benefits in the U.S. The HBV vaccine also confers immunity against a sexually transmitted disease (hepatitis B) that causes cancer, and the CDC’s recommendation to add it to the schedule of vaccinations required as a condition of middle school enrollment generated no meaningful controversy among culturally diverse citizens—over 90% of whose children received the shot in each of the years during which the states were embroiled in controversy over making the HPV shot mandatory.
The antagonistic cultural meanings that fuel political controversy over GM foods in Europe aren’t inevitable either. They are completely absent in the U.S.
(b) The same methods that scholars of public risk perception use to make sense of these differences, moreover, can be used to forecast the conditions that make one or another emerging technology—such as synthetic biology or nanotechnology—vulnerable to becoming suffused with such meanings. Action can then be taken to steer these technologies down a safer path—not for the purpose of making members of the public believe they are or aren’t genuinely hazardous, but rather for the purpose of assuring that members of the public will reliably recognize the best available evidence on exactly that.
Indeed, the danger of cultural polarization associated with the path the HPV vaccine traveled in being introduced to the public was forecast with such methods, which corroborated the warnings of numerous health professionals and others.
This evidence wasn’t rejected; it simply wasn’t considered. There was no mechanism in any part of the drug-regulatory approval process for anyone to present, or any institution to act on, evidence of the hazards associated with fast-track approval of a girls-only STD vaccine combined with a high-profile nationwide campaign in state legislatures to make the vaccine mandatory.
(c) Without systematic procedures to acquire and intelligently use scientific knowledge to protect the science communication environment, its contamination is inevitable.
The inevitable danger of such conflicts is built into the constitution of the Liberal Republic of Science. The same institutions and culture of political freedom that fuel the engine of competitive conjecture and refutation that drives science assure—mandate—that there be no single institution endowed with the authority to certify what is known to science. But the immensity and complexity of what is known cannot certify or announce itself; the idea that it can is the sentimental, sociologically and epistemologically naïve variant of nullius in verba.
In the Open Society there will be a plurality of certifiers—in the form of communities of free individuals associating with others with whom they have converged in the exercise of their reason on a shared understanding of the best way to live.
This dynamic, unregulated, pluralistic system of certification of what is known to science works in the vast run of cases!
Yet it is inevitable—statistically!—that it sometimes won’t: the sheer enormity of things that science can discern in a free society & the non-zero probability that any one of those can become entangled in antagonistic cultural meanings mean that risk regulation will remain a permanent site of illiberal forms of status competition among the plurality of cultural groups in which free, reasoning individuals form their understanding of what is known to science. This is Popper’s revenge . . . .
It is foolish (an embarrassing display of shallow thinking combined with indulgence of tribal chauvinism) to blame “profit-mongering corporations” or “political extremists” for disasters like the one that occurred with the introduction of the HPV vaccine in the U.S. Until we—the citizens of the Liberal Republic of Science—use our reason and exercise our will to create a common culture of evidence-based science communication dedicated to protecting the science communication environment, we are destined to suffer the reason-effacing, welfare-enervating, freedom-annihilating spectacle of cultural conflict over risk.
(d) Writing at the birth of liberal democracy, Tocqueville famously remarked the need for “a new political science for a world itself quite new.”
Today we need a new political science—a science of science communication—dedicated to protecting the process by which plural communities of free and reasoning individuals certify to themselves what is known by science.
We must use our reason to protect the historic condition of freedom and the unprecedented immensity of collective knowledge that are the reciprocal defining features of the Liberal Republic of Science.