Don’t select on the dependent variable in studying the science communication problem

I’ve talked about this before (in fact, there isn’t anything that I ever talk about that I haven’t talked about before, including having talked before about everything that I ever say), but it’s impossible to overemphasize the point that one will never understand the “science communication problem”—the failure of valid, widely accessible decision-relevant science to dispel controversy over risk and other facts to which that evidence directly speaks—if one confines one’s attention to instances of the problem.

If one does this—confines one’s attention to big, colossal, pitiful spectacles like the conflict over issues like climate change, or nuclear power, or the HPV vaccine, or gun control—one’s methods will be marred by a form of the defect known as “selecting on the dependent variable.”

“Selecting on the dependent variable” refers to the practice of restricting one’s set of observations to cases in which some phenomenon of interest has been observed and excluding from the set cases in which the phenomenon was not observed. Necessarily, any inferences one draws about the causes of such a phenomenon will then be invalid because, in ignoring cases in which the phenomenon didn’t occur, one has omitted from one’s sample instances in which the putative cause might have been present but didn’t generate the phenomenon of interest—an outcome that would falsify the conclusion.  Happens all the time, actually, and is a testament to the power of ingrained non-scientific patterns of reasoning in our everyday thought.
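
To see the trap in miniature, here’s a toy simulation (Python; the numbers and the “putative cause” are invented for illustration, not drawn from any actual study). The cause is present for about 80% of issues whether or not conflict erupts, so it explains nothing; yet a researcher who samples only the contested issues will find it present in 80% of them and may be tempted to credit it.

```python
# Illustrative only: synthetic "issues," not data from any real study.
import random

random.seed(0)

issues = []
for _ in range(10_000):
    cause_present = random.random() < 0.8   # putative cause is common everywhere
    conflict = random.random() < 0.05       # conflict is rare and independent of it
    issues.append((cause_present, conflict))

# Invalid design: restrict attention to issues where conflict occurred.
contested = [cause for cause, conflict in issues if conflict]
print("Share of contested issues featuring the 'cause':",
      sum(contested) / len(contested))      # ~0.80 -- looks like an explanation

# Valid design: compare conflict rates with and without the putative cause.
with_cause = [conflict for cause, conflict in issues if cause]
without_cause = [conflict for cause, conflict in issues if not cause]
print("Conflict rate given cause:   ", sum(with_cause) / len(with_cause))
print("Conflict rate without cause: ", sum(without_cause) / len(without_cause))
# The two rates are essentially the same, so the 'cause' explains nothing --
# a fact the conflict-only sample is structurally incapable of revealing.
```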

So to protect myself and the 14 billion regular readers of this blog from this trap, I feel obliged at regular intervals to call attention to instances of the absence of the sort of conflict that marks the science communication problem with respect to applications of decision-relevant science that certainly could generate such dispute—and that, in some societies and at some times, actually have.

To start, consider a picture of what the science communication problem looks like.

There is conflict among groups of citizens based on their group identities—a fact reflected in the bimodal distribution of risk perceptions.

In addition, the psychological stake that individuals have in persisting in beliefs that reflect and reinforce their group commitments is bending their reason. They are using their intelligence not to discern the best available evidence but to fit whatever information they are exposed to to the position that is dominant in their group. That’s why polarization actually increases as science comprehension (measured by “science literacy,” “numeracy,” “cognitive reflection,” or any other relevant measure) increases.
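
If it helps to see the claim in stylized form, here is a toy model (Python, invented numbers; this is not the author’s data or statistical model). On the polarized issue, the pull of group identity is assumed to grow with science comprehension, so the gap between groups widens among the most comprehending; on a “normal” issue, comprehension pulls everyone toward the same assessment.

```python
# Toy model of identity-protective reasoning -- illustrative assumptions only,
# not the actual survey data or statistical model.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice([-1, 1], size=n)            # two cultural groups
comprehension = rng.uniform(0, 1, size=n)      # science-comprehension score

# Polarized issue: comprehension is recruited to fit evidence to the group
# position, so the identity effect on perceived risk grows with comprehension.
polarized = group * (0.5 + 2.0 * comprehension) + rng.normal(0, 1, n)

# Normal issue: comprehension pulls all groups toward the same assessment.
normal = -2.0 * comprehension + rng.normal(0, 1, n)

low, high = comprehension < 0.5, comprehension >= 0.5

def group_gap(y, mask):
    """Mean difference in perceived risk between the two groups."""
    return abs(y[mask & (group == 1)].mean() - y[mask & (group == -1)].mean())

for label, y in [("polarized issue", polarized), ("normal issue", normal)]:
    print(f"{label}: group gap = {group_gap(y, low):.2f} (low comprehension), "
          f"{group_gap(y, high):.2f} (high comprehension)")
# The gap roughly doubles with comprehension on the polarized issue and stays
# near zero on the normal one.
```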

This sort of division is pathological, both in the sense of being bad for the well-being of a democratic society and in the sense of being unusual.

What’s bad is that where there is this sort of persistent group-based conflict, members of a pluralistic democratic society are less likely to converge on the best available evidence—no matter what it is. Those who “believe” in climate change get this—we ought to have a carbon tax or cap and trade or some other set of effective mitigation policies by now, they say, and would but for this pathology.

But if you happen to be a climate skeptic and don’t see why the pathology of cultural polarization over decision-relevant science is a problem, then you must work to enhance the power of your imagination.

Let me help you: do you think it is a good idea for the EPA to be imposing regulations on carbon emissions? For California to have its own cap & trade policy? If you don’t, then you should also be trying to figure out why so many citizens disagree with you (and should be appalled, just as believers should be, when you see those on your own side engaging in just-so stories to try to explain this state of affairs).

You should also be worried that your own assessments of what the best evidence is, on this issue or any other that reflects this pathology, might not be entitled to the same confidence you usually accord them, since the normal forces that tend reliably to guide reflective citizens to apprehension of the best available evidence have clearly been scrambled and disrupted here. (If you aren’t worried, then you lack the humility that alerts a critically reasoning person to the ever-present possibility of error on his or her part and the need to correct it.)

It doesn’t matter what position you take on any particular issue subject to this dynamic. It is bad for the members of a democratic society to be invested in positions on policy-relevant science on account of the stake those individuals have in the enactment of policies that reflect their group’s position rather than ones that reflect the best available evidence.

What’s unusual is that this sort of conflict is exceedingly rare. There are orders of magnitude more issues informed by decision-relevant science in which citizens with different identities don’t polarize.

On those issues, moreover, increased science comprehension doesn’t drive groups apart; on the contrary, it is clearly one of the driving forces of their convergence. Individuals reasonably look for guidance to those who share their commitments and who are knowledgeable about what’s known to science. Individuals with different group commitments are looking to different people—for the most part—but because there are plenty of highly science-comprehending individuals in all the groups in which individuals exercise their rational faculty to discern who knows what about what, members of all these groups tend to converge.

That’s the normal situation. Here’s what it looks like:

What’s normal here, of course, isn’t the shape of the distribution of views across groups. For all groups, positions on the risks posed by medical x-rays are piled up at the “low risk” end of the “industrial strength” risk-perception measure.

But these distributions are socially normal. There isn’t the bimodal distribution characteristic of group conflict. What’s more, the effect of increased science comprehension runs in the same direction for all groups, and reflects convergence among the members of these groups who can be expected to play the most significant role in the distribution of knowledge.
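
For readers who want a rough feel for the two “pictures” without the original figures, here is a crude sketch (Python; the numbers are invented for illustration, using a 0-7 response scale as a stand-in for the “industrial strength” risk-perception measure).

```python
# Invented numbers sketching the two distribution shapes described above --
# not the actual survey responses.
import numpy as np

rng = np.random.default_rng(2)

def responses(mean, sd, n):
    """Simulated answers on a 0-7 risk-perception item."""
    return np.clip(np.round(rng.normal(mean, sd, n)), 0, 7).astype(int)

# Pathological issue: the two groups pile up at opposite ends -> bimodal overall.
pathological = np.concatenate([responses(1.5, 1.0, 500), responses(5.5, 1.0, 500)])

# Normal issue (e.g., medical x-rays): every group concentrates at the
# low-risk end, so the combined distribution has a single low-risk mode.
normal = responses(1.5, 1.2, 1000)

for label, sample in [("pathological", pathological), ("normal", normal)]:
    counts = np.bincount(sample, minlength=8)
    print(f"{label:>12}:", "  ".join(f"{k}:{c:>3}" for k, c in enumerate(counts)))
```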

Do these sorts of “pictures” tell us what to do to address the science communication problem? Of course not.  Only empirical testing of hypothesized causes and corresponding strategies for dispelling the problem—and better yet avoiding it altogether—can.

My point is simply that one can’t do valid research of that sort if one “selects on the dependent variable” by examining only cases in which persistent conflict in the face of compelling scientific evidence exists.

Such conflict is rare.  It is not the norm.  Moreover, any explanation for why we see it in the pathological cases that doesn’t also explain why we don’t in the nonpathological or normal ones is necessarily unsound.

Are you able to see why this is important?  Here’s a hint: it’s true that the “ordinary citizen” (whatever his or her views on climate change, actually) doesn’t have a good grasp of climate science; but his or her grasp of the physical science involved in assessing the dangers of x-ray radiation—not to mention the health science involved in assessing the risks of fluoridation of water or the biological science that informs pasteurization of milk, the toxicology that informs restrictions on formaldehyde in pressed wood products, the epidemiology used to assess the cancer risks of cell phones and high-voltage power lines, and a host of additional issues that fit the “normal” picture—is no better.

We need to be testing hypotheses, then, on why the social and cognitive influences that normally enable individuals to orient themselves correctly (as individuals and as citizens) with respect to the best available evidence on these matters are not operating properly with regard to the pathological ones.
