The cultural certification of truth in the Liberal Republic of Science (or part 2 of why cultural cognition is not a bias)

This is post no. 2 on the question “Is cultural cognition a bias?”, to which the answer is, “nope—it’s not even a heuristic; it’s an integral component of human rationality.”

Cultural cognition refers to the tendency of people to conform their perceptions of risk and other policy-consequential facts to those that predominate in groups central to their identities. It’s a dynamic that generates intense conflict on issues like climate change, the HPV vaccine, and gun control.

Those conflicts, I agree, aren’t good for our collective well-being. I believe it’s possible and desirable to design science communication strategies that help to counteract the contribution that cultural cognition makes to such disputes.

I’m sure I have, for expositional convenience, characterized cultural cognition as a “bias” in that context. But the truth is more complicated, and it’s important to see that—important, for one thing, because a view that treats cultural cognition as simply a bias is unlikely to appreciate what sorts of communication strategies can offset the conditions that pit cultural cognition against enlightened self-government.

In part 1, I bashed the notion—captured in the Royal Society motto nullius in verba, “take no one’s word for it”—that scientific knowledge is inimical to, or even possible without, assent to authoritative certification of what’s known.

No one is in a position to corroborate through meaningful personal engagement with evidence more than a tiny fraction of the propositions about how the world works that are collectively known to be true. Or even a tiny fraction of the elements of collective knowledge that are absolutely essential for one to accept, whether one is a scientist trying to add increments to the repository of scientific insight, or an ordinary person just trying to live.

What’s distinctive of scientific knowledge is not that it dispenses with the need to “take it on the word of” those who know what they are talking about, but that it identifies as worthy of such deference only those who are relating knowledge acquired by the empirical methods distinctive of science.

But for collective knowledge (scientific and otherwise) to advance under these circumstances, it is necessary that people—of all varieties—be capable of reliably identifying who really does know what he or she is talking about.

People—of all varieties—are remarkably good at doing that. Put 100 people in a room and ask them to solve, say, a calculus problem, and likely one will genuinely be able to solve it and four will mistakenly believe they can. Let the people out 15 minutes later, however, and it’s pretty likely that all 100 will know the answer. Not because the one who knew will have taught the other 99 how to do calculus. But because that’s the amount of time it will take the other 99 to figure out that she (and none of the other four) was the one who actually knew what she was talking about.

But obviously, this ability to recognize who knows what they are talking about is imperfect. Like any other faculty, too, it will work better or worse depending on whether it is being exercised in conditions that are congenial or uncongenial to its reliable functioning.

One condition that affects the quality of this ability is cultural affinity. People are likely to be better at “reading” people—at figuring out who really knows what about what—when they are interacting with others with whom they share values and related social understandings. They are, sadly, more likely to experience conflict with those whose values and understandings differ from theirs, a condition that will interfere with transmission of knowledge.

As I pointed out in the last post, cultural affinity was part of what enabled the 17th and early 18th Century intellectuals who founded the Royal Society to overturn the authority of the prevailing, nonempirical ways of knowing and to establish in their stead science’s way. Their shared values and understandings underwrote both their willingness to repose their trust in one another and (for the most part!) not to abuse that trust. They were thus able to pool, and thus efficiently build on and extend, the knowledge they derived through their common use of scientific modes of inquiry.

I don’t by any means think that people can’t learn from people who aren’t like them. Indeed, I’m convinced they can learn much more when they are able to reproduce within diverse groups the understandings and conventions that they routinely use inside more homogenous ones to discern who knows what about what. But evidence suggests that the processes useful to accomplish this widening of the bounds of authoritative certification of truth are time consuming and effortful; people sensibly take the time and make the effort in various settings (in innovative workplaces, e.g., and in professions, which use training to endow their otherwise diverse members with shared habits of mind). But we should anticipate that the default source of “who knows what about what” will for most people most of the time be communities whose members share their basic outlooks.

The dynamics of cultural cognition are most convincingly explained, I believe, as specific manifestations of the general contribution that cultural affinity makes to the reliable, everyday exercise of the ability of individuals to discern what is collectively known. The scales we use to measure cultural worldviews likely overlap with a large range of more particular, local ties that systematically connect individuals to others with whom they are most comfortable and most adept at exercising their “who knows what they are talking about” capacities.

Normally, too, the preference of people to use this capacity within particular cultural affinity groups works just fine.

People in liberal democratic societies are culturally diverse; and so people of different values will understandably tend to acquire access to collective knowledge within a large number of discrete networks or systems of certification. But for the most part, those discrete cultural certification systems can be expected to converge on the best available information known to science. This has to be so; for no cultural group that consistently misled its members on information of such vital importance to their well-being could be expected to last very long!

The work we have done to show how cultural cognition can polarize people on risks and other policy-relevant facts involves pathological cases. Disputes over matters like climate change, nuclear power, the HPV vaccine, and the like are pathological both in the sense of being bad for people—they make it less likely that popularly accountable institutions will adopt policies informed by the best available information—and in the sense of being rare: the number of issues that admit of scientific investigation that generate persistent divisions across the diverse networks of cultural certification of truth is tiny in relation to the number that reflect the convergence of those same networks.

An important aim of the science of science communication is to understand this pathology. CCP studies suggest that such pathologies arise in cases in which facts that admit of scientific investigation become entangled in antagonistic cultural meanings—a condition that creates pressures (incentives, really) for people selectively to seek out and credit information conditional on its supporting rather than undermining the position that predominates in their own group.

It is possible, I believe, to use scientific methods to identify when such entanglements are likely to occur, to structure procedures for averting such conditions, and to formulate strategies for treating the pathology of culturally antagonistic meanings when preventive medicine fails. Integrating such knowledge with the practice of science and science-informed policymaking, in my opinion, is vital to the well-being of liberal democratic societies.

But for the reasons that I’ve tried to suggest in the last two posts, this understanding of what the science of science communication can and should be used to do does not reflect the premise that cultural cognition is a bias. The discernment of “who knows what about what” that it enables is essential to the ability of our species to generate scientific knowledge and of individuals to participate in what is known to science.

Indeed, as I said at the outset, it is not correct even to describe cultural cognition as a heuristic. A heuristic is a mental “shortcut”—an alternative to a more effortful and more intricate mental operation that might well exceed the time and capacity of most people to exercise in most circumstances.

But there is no substitute for relying on the authority of those who know what they are talking about as a means of building and transmitting collective knowledge. Cultural cognition is no shortcut; it is an integral component in the machinery of human rationality.

Unsurprisingly, the faculties that we use in exercising this feature of our rationality can be compromised by influences that undermine its reliability. One of those influences is the binding of antagonistic cultural meanings to risk and other policy-relevant facts. But it makes about as much sense to treat the disorienting impact of antagonistic meanings as evidence that cultural cognition is a bias as it does to describe the toxicity of lead paint as evidence that human intelligence is a “bias.”

We need to use science to protect the science communication environment from toxins that disable us from using faculties integral to our rationality. An essential step in the advance of this science is to overcome simplistic pictures of what our rationality consists in.

Part 1
