“Fake news”–enh. “Alternative Facts presidency”–watch out! (Talk summary & slides)

My remarks, rationally reconstructed, at the AAAS Panel on “Fake News and Social Media: Impacts on Science Communication and Education” (slides here).

1. Putting the bottom line on top.  If one is trying to assess the current health of science communication in our society, then one should regard the case of “fake news” as akin to a bad head cold.

The systematic propagation of false information that President Trump is engaged in, on the other hand, is a cancer on the body politic of enlightened self-government.

2. Conjectures inviting refutation. I’ll tell you why I see the “alternative facts presidency” as so much more serious than “fake news.” But before I continue, I want to issue a proviso: namely, that everything I think on these matters is in the nature of informed conjecture.

I will be drawing on the dynamic of identity-protective reasoning to advance my claims (Flynn et al. 2017; Kahan 2010). Because we have learned so much about mass opinion from studies featuring this dynamic, it makes perfect sense to suspect this form of information processing will determine how people react to fake news and to the stream of falsehoods that flow continuously from the Trump administration.

But we should recognize that these phenomena are different from the ones that have supplied the focus for the study of identity-protective reasoning.

Other dynamics—including ones that also reflect well-established mechanisms of cognition—might support competing hypotheses.

Accordingly, it’s not appropriate to stand up in front of you and say “here is what social science tells us about fake news and presidential misinformation . . . .”  Social science hasn’t spoken yet. Unless he or she has data that directly address these phenomena, anyone who tells you that “social science says” this or that about “fake news” is engaged in story-telling, a practice that can itself mislead the public and distort scholarly inquiry.

I will, for purposes of exposition, speak with a tone of conviction.  But I’m willing to do that only because I can now be confident that you’ll understand my position to be a provisional one, reflecting how things look to me at the Bayesian periphery of a frontier that warrants (demands) empirical exploration. Once valid studies start to accumulate, I am prepared to pull up stakes and move in the direction they prescribe, should it turn out that the ground I’m standing on now is insecure.

3.  Models.  I’m going to use two simple models to guide my exposition.  I’ll call one the “passive aggregator theory” (PAT).  PAT envisions a credulous public that is pushed around by misinformation emanating from powerful economic and political interest groups.

That model, I will contend, is simply wrong.

The truth is something closer to the second model I want you to consider.  This one can be called the “motivated public theory” (MPT).  According to MPT, members of the public are unconsciously impelled to seek out information that supports the position of the identity-defining group they belong to and to dismiss as non-credible any information that challenges that position.

Where the public is motivated to see things in an identity-reinforcing way, it will be very profitable to create misinformation that gives members of the public what they want—namely, corroboration that their group’s positions are right, and those of their benighted rival wrong.

In my view, that’s what the fake news we saw during the election was all about.  Some smart people in Macedonia or wherever set up sites with scandalous—in fact, outright incredible—headlines to direct traffic to websites that had agreed to pay them to do exactly that.  Indeed, every fake news story was ringed with classic click bait features on overcoming baldness, restoring wrinkled skin, curing erectile dysfunction, and the like.

On the MPT account, the only people who’d be enticed to read such material would be people already predisposed to believe (or maybe fantasize) that the subjects of the stories (Hillary Clinton and Donald Trump, for the most part) were evil or stupid enough to engage in the behavior the stories describe. The incremental effect of these stories in shaping their opinions would be nil.

Same for those predisposed not to believe the stories.  They’d be unlikely to see most of them because of the insularity of political-news networks in social media. But even if they saw them, they’d dismiss them out of hand as noncredible.

On net, no one’s view of the world would change in any meaningful way.

4. Empirics. Consider some data that makes a conjecture like this plausible.

a. In one study (Kahan et al., in press), ordinary members of the public were instructed to determine the result of an experiment by looking at a two-by-two contingency table.  The right way to interpret information presented in this form (a common one for reporting experimental research) is to compare the ratios of positive to negative outcomes conditional on the treatment.  Subjects who did that would get the correct answer.

But most people don’t correctly interpret 2×2 contingency tables or alternative formulations that convey the same information. Instead they simply compare the number of positive and negative results in the cells for the treatment condition. Or, if they are a little smarter, they do that and also look at the number of positive results in both the treatment condition and the untreated control.

Anyone following that strategy would get the “wrong” answer.
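To make the contrast concrete, here is a minimal sketch in Python with made-up cell counts (a hypothetical table, not the study’s actual stimulus). The correct reading compares the ratio of positive to negative outcomes within each condition; the shortcut just compares the two raw counts in the treatment row.

```python
# Hypothetical 2x2 contingency table (made-up numbers, not the study's stimulus).
# Rows are conditions; values are outcome counts.
table = {
    "treatment": {"improved": 200, "worsened": 100},
    "control":   {"improved": 60,  "worsened": 20},
}

def correct_reading(t):
    """Compare ratios of positive to negative outcomes, conditional on treatment."""
    treated = t["treatment"]["improved"] / t["treatment"]["worsened"]   # 2.0
    untreated = t["control"]["improved"] / t["control"]["worsened"]     # 3.0
    return "treatment helps" if treated > untreated else "treatment hurts"

def heuristic_reading(t):
    """The common shortcut: compare the two counts in the treatment row only."""
    row = t["treatment"]
    return "treatment helps" if row["improved"] > row["worsened"] else "treatment hurts"

print(correct_reading(table))    # -> treatment hurts (2.0 < 3.0)
print(heuristic_reading(table))  # -> treatment helps (200 > 100)
```

With these made-up numbers, most of the treated improved, yet they improved at a lower rate than the untreated; only the ratio comparison catches that.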

The design also had an experimental component. Half the subjects were told that the 2×2 table summarized the results—better or worse complexions—of a new skin-rash treatment.  The other half were told that it reflected the results—violent crime up versus violent crime down—of a law that permitted citizens to carry concealed weapons in public.

In the skin-rash condition, the likelihood of getting the answer right turned only on the Numeracy (quantitative-reasoning proficiency) of the subjects, regardless of whether they were right-leaning or left-leaning.

But in the gun-control condition, high-Numeracy subjects were likely to get the answer right only when the data, properly interpreted, supported the position that was dominant in their ideological group. When the data, properly interpreted, supported their ideological rivals’ position, the subjects highest in Numeracy were no more likely to get the answer correct than those lowest in Numeracy. Essentially, they used their reasoning proficiency to pry open a confabulatory escape hatch from the logical trap in which they found themselves.

As a result, the highest-Numeracy subjects were the most divided on what the data signified.

This result is consistent with MPT.  If it captures the way people reason outside the lab, then we should expect to see not only that members of opposing affinity groups are polarized on contentious empirical issues; we should also expect to see the degree of polarization between them increasing in lockstep with diverse citizens’ science comprehension capacities.

And indeed, that is what we see (Kahan 2016).
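To see why this pattern implies more, not less, polarization as reasoning capacity rises, here is a toy sketch of my own (an assumed functional form with arbitrary numbers, not the study’s model or estimates). It simply stipulates that proficiency makes a congenial answer easier to see and an uncongenial one easier to explain away, then reports the resulting gap between two groups looking at the same data.

```python
# Toy sketch (my own assumptions, not the study's statistical model):
# suppose the data, properly read, support Group A's position and not Group B's.
def p_accept_correct_answer(proficiency, congenial):
    """Assumed probability of accepting what the data actually show.

    proficiency runs from 0 to 1; when the right answer is congenial,
    skill helps; when it is uncongenial, skill is spent explaining it away.
    """
    return 0.5 + 0.4 * proficiency if congenial else 0.5 - 0.4 * proficiency

for proficiency in (0.0, 0.5, 1.0):
    group_a = p_accept_correct_answer(proficiency, congenial=True)
    group_b = p_accept_correct_answer(proficiency, congenial=False)
    print(f"proficiency={proficiency:.1f}  gap between groups: {group_a - group_b:.2f}")
# prints gaps of 0.00, 0.40, and 0.80: under these assumptions the two groups
# diverge most at the top of the proficiency scale, which is the MPT signature.
```

The numbers are arbitrary; the point is only that once reasoning proficiency is recruited for identity protection, giving people more of it widens the divide.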

b. Now consider the significance of this for fake news.

From this simple model, we can see how identity-protective reasoning can profoundly divide opposing cultural groups.  Yet no one was being misled about the relevant information. Instead, the subjects were misleading themselves—to avoid the dissonance of reaching a conclusion contrary to their political identities.

Nor was the effect a result of credulity or any like weakness in critical reasoning.

On the contrary, the very best reasoners—the ones best situated to make sense of the evidence—were the ones who displayed the strongest tendency toward identity-protective reasoning.

Because biased information-search is also a consequence of identity-protective cognition, we should expect that people who reason this way will be much more likely to encounter information that reinforces rather than undermines their predispositions.

Of course, people might now and again stumble across “fake news” that goes against their predispositions, too.  But because we know such people are already disposed to bend even non-misleading information into a shape that affirms rather than threatens their identities, there is little reason to expect them to credit “fake news” when the gist of it defies their political preconceptions.

These are inferences that support MPT over PAT.

5. As I stated at the outset, we shouldn’t equate the Trump Administration’s persistent propagation of misinformation with the misinformation of the cartoonish “fake news” providers.  The latter, I’ve just explained, are likely to have only a small effect, or none at all, on the science communication environment; the former, however, fills that environment with toxins that enervate human reason.

Return to the “motivated public theory.” We shouldn’t be satisfied to treat a “motivated public” as exogenous. How do people become motivated, identity-protective reasoners?

They aren’t, after all, on myriad issues (e.g., GM foods) on which we could easily imagine conflict—indeed, on which there actually is conflict in other places (e.g., GM foods in Europe).

A likely answer, my collaborators and I concluded in a recently published study (Kahan et al. 2017), is the advent of culturally toxic memes.

Memes are self-propagating ideas or practices that enjoy wide circulation by virtue of their salience.

Culturally toxic memes are ones that fuse positions on risks or similar policy-relevant facts to individual identities. They operate primarily by stigmatizing those who hold such positions as stupid and evil.

When that happens, people gravitate toward habits of mind that reinforce their commitment to their groups’ positions. They do that because holding a position consistent with others in their groups is more important to them—more consequential for their well-being—than is holding a position that is correct.

What an ordinary member of the public thinks about climate change, e.g., will not affect the risk that it poses to her or to anyone she cares about. The impact she has as an individual consumer or an individual voter is too small to make any real difference.

But given what holding such a position has come to signify about who one is—whose side one is on in a vicious struggle between competing groups for cultural ascendency—forming a belief (an attitude, really) that estranges her from her peers could have devastating psychic and material consequences.

Of course, when everyone resorts to this form of reasoning simultaneously, we’re screwed.  Under these conditions, citizens of a pluralistic democratic society will fail to converge, or to converge as quickly as they should, on valid empirical evidence about the dangers they face and how to avert them (Kahan et al. 2012).

The study we conducted modeled how exposure to toxic memes (ones linking the spread of Zika to global warming or to illegal immigrants) could rapidly polarize cultural groups that are now largely in agreement about the dangers posed by the Zika virus.

This is why we should worry about Trump: his form of misinformation, combined with the office that he holds, makes him a toxic-meme propagator of unparalleled influence.

When Trump spews forth with lies, the media can’t simply ignore him, as they would a run-of-the-mill crank. What the President of the United States says always compels coverage.

Such coverage, in turn, impels those who want to defend the truth to attack Trump in order to try to undo the influence his lies could have on public opinion.

But because the ascendency of Trump is itself a symbol of the status of the cultural groups that propelled him to the White House, any attack on him for lying is likely to invest his position with the form of symbolic significance that generates identity-protective cognition: the fight communicates a social meaning—this is what our group believes, and that is what our enemies believe—that drowns out the facts (Nyhan et al. 2010, 2013).

We aren’t polarized today on the safety of universal childhood immunization (Kahan 2013; CCP 2014). But we could easily become so if Trump continues to lie about the connection between vaccinations and autism.

We aren’t polarized today on the means appropriate to counteract the threat of the Zika virus (Kahan et al. 2017).  But if Trump tries to leverage public fear of Zika into support for tightening immigration laws, we could become politically polarized—and cognitively impeded from recognizing the best scientific evidence on the spread of this disease.

Trump is uniquely situated, and apparently emotionally or strategically driven, to enlarge the domain of issues on which this reason-effacing dynamic degrades our society’s capacity to recognize and give proper effect to decision-relevant science.

6.  Trump, in sum, is our nation’s science-communication environment polluter-in-chief. We shouldn’t let concern over “fake news” on Facebook distract us from the threat he uniquely poses to enlightened self-government or from identifying the means by which the threat posed by his style of political discourse can be repelled.

Refs

CCP, Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Experimental Investigation (Jan. 27, 2014).

Flynn, D.J., Nyhan, B. & Reifler, J. The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology 38, 127-150 (2017).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahan, D.M. Culturally antagonistic memes and the Zika virus: an experimental test. J Risk Res 20, 1-40 (2017).

Kahan, D.M. The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It. in Emerging Trends in the Social and Behavioral Sciences (John Wiley & Sons, Inc., 2016).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Behavioural Public Policy  (in press).

Nyhan, B. & Reifler, J. When corrections fail: The persistence of political misperceptions. Polit Behav 32, 303-330 (2010).

Nyhan, B., Reifler, J. & Ubel, P.A. The Hazards of Correcting Myths About Health Care Reform. Medical Care 51, 127-132 (2013).
