The culturally polarizing effect of the “anti-science trope” on vaccine risk perceptions

The “anti-science trope” refers to a common theme in ad hoc risk communication that links concern about vaccine risks to disbelief in evolution and climate skepticism, all of which are cited as instances of a creeping hostility to science in the U.S. general public, or at least some component of it.

In the last post, I presented evidence, collected as part of the CCP Vaccine Risk Perception study, that showed that the trope has no meaningful connection to fact.

Those who accept human evolution and those who reject it, those who believe in climate change and those who are skeptical of it, all overwhelmingly agree that vaccine risks are low and vaccine benefits high.

The idea that either climate change skepticism or disbelief in evolution denotes hostility to science or lack of comprehension of science is false, too. That’s something that a large number of social science studies show.  The CCP Vaccine Risk study doesn’t add anything to that body of evidence.

But the CCP Vaccine Risk study did examine whether differences in science comprehension and religiosity, which interact in an important way in disputes over climate change and evolution, affect vaccine risk perceptions. They don’t: neither has any meaningful impact.

In addition to examining whether there was any factual substance to the anti-science trope, the CCP Vaccine Risk Perception study also investigated what the impact of the trope is—or at least could be if it were propagated widely enough—on public opinion.

For that purpose, the study used experimental methods. The experiment had three key elements.

First was a measurement of subjects’ cultural predispositions toward societal risks.

I’ve described the strategy used to do so in several earlier posts. But basically, the experiment used an “interpretive community” strategy, in which unobserved or latent group predispositions are extracted from subjects’ perceptions of a host of societal risks that are known to divide people with diverse cultural and political outlooks. This approach, as I’ve explained, furnishes the “highest resolution” for measuring the influence group predispositions might be having on perceptions of a risk on which there is reason to believe the impact might be small.

That analysis identified two cross-cutting or orthogonal dimensions along which risk predispositions could be measured.  I labeled them the “public safety” and “social deviancy” dimensions, based on their respective indicators (various environmental risks, guns, second-hand smoke in the former case; legalization of marijuana and prostitution and teaching of high school sex ed in the latter).

Subjects in the diverse 2,300-person sample of U.S. adults could thus be assigned to one of four “interpretive communities” (ICs) based on their scores relative to the means of these two “risk perception dimensions”: IC-α (“high public-safety,” “low social-deviancy”); IC-β (“high public-safety,” “high social-deviancy”); IC-γ (“low public-safety,” “low social-deviancy”); and IC-δ (“low public-safety,” “high social-deviancy”). The intensity of the study subjects’ commitment to one or another of these groups can be measured by their scores on the public-safety and social-deviancy risk-perception scales.
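
For readers who want a more concrete sense of the mechanics, here is a minimal sketch of how a two-dimensional classification of this kind can be derived. It is not the study’s actual code: the item names, the simulated data, and the factor-analysis routine are all illustrative assumptions; only the quadrant logic is the point.

```python
# Illustrative sketch only (not the study's actual code): extract two latent
# risk-predisposition dimensions from a battery of risk-perception items and
# assign respondents to the four "interpretive communities" by quadrant.
# The item names and simulated data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 2300

# Simulate responses to six risk items: three "public safety" indicators and
# three "social deviancy" indicators, each loading mainly on one latent factor.
latent = rng.normal(size=(n, 2))
loadings = np.array([[0.80, 0.05], [0.75, 0.00], [0.70, 0.05],   # safety items
                     [0.05, 0.80], [0.00, 0.75], [0.05, 0.70]])  # deviancy items
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 6))
df = pd.DataFrame(items, columns=["env_risk", "guns", "smoke",
                                  "marijuana", "prostitution", "sex_ed"])

# Two-factor solution; the factor scores play the role of the "public safety"
# and "social deviancy" scales (in practice one would inspect the loadings to
# label and orient the factors before interpreting them).
scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(df)
df["public_safety"], df["social_deviancy"] = scores[:, 0], scores[:, 1]

# Quadrant assignment relative to the mean of each dimension.
hi_safety = df["public_safety"] >= df["public_safety"].mean()
hi_deviancy = df["social_deviancy"] >= df["social_deviancy"].mean()
df["IC"] = np.select(
    [hi_safety & ~hi_deviancy, hi_safety & hi_deviancy,
     ~hi_safety & ~hi_deviancy, ~hi_safety & hi_deviancy],
    ["IC-alpha", "IC-beta", "IC-gamma", "IC-delta"])
print(df["IC"].value_counts())
```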

The second element was exposure of the subjects to examples of “ad hoc risk communication.”

The subjects were assigned to experimental conditions or groups, each of which read a different communication patterned on material found in the media or on the internet.

One of these communications used the “anti-science trope.” Patterned on real-world communications (including ones reproduced in the Appendix to the Report), it took the form of an op-ed that described disbelief in evolution, climate skepticism, and the belief that vaccines cause autism as progressive manifestations of a mutating “anti-science virus.” As is true of most real-world communications embodying the anti-science trope, the experimental communication displayed an unmistakably partisan orientation and conveyed contempt for members of the public who are skeptical of climate change and disbelieve evolution.

The third element was measurement of the subjects’ perceptions of vaccine risks and benefits.

The study used a large battery of risk and benefit items, which were combined into a highly reliable scale, “PUBLIC_HEALTH” (Cronbach’s α = 0.94). Scores on the scale were transformed into z-scores (i.e., normalized so that increments reflected standard deviations from the mean) and coded so that lower scores denoted relatively negative assessments of vaccines and higher scores relatively positive ones.
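
As a rough illustration of the scale-construction step, here is a short sketch of computing Cronbach’s α for a set of items and converting the composite into z-score units. The data and item names are simulated stand-ins, not the study’s actual items.

```python
# Minimal sketch, assuming hypothetical item names and simulated data, of how
# a multi-item battery might be combined into a standardized scale like
# PUBLIC_HEALTH: compute Cronbach's alpha, average the items, then z-score.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (higher = more reliable)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
common = rng.normal(size=1000)  # shared attitude driving all item responses
items = pd.DataFrame({f"item_{i}": common + rng.normal(scale=0.4, size=1000)
                      for i in range(6)})

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")

# Composite score in z-score units: increments are standard deviations from
# the mean; higher values denote more positive assessments of vaccines.
raw = items.mean(axis=1)
public_health = (raw - raw.mean()) / raw.std(ddof=1)
print(public_health.describe())
```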

In the experiment, then, the risk perceptions of subjects exposed to different forms of “ad hoc risk communication” were compared with those of subjects assigned to read a news story unrelated to vaccines, who served as the “control” group.

The results . . . .

As previewed in an earlier blog post, the study found that among members of the control group there was no practically meaningful relationship between vaccine risk perceptions and the cultural risk predispositions measured by the “public safety” and “social deviancy” IC dimensions. IC-αs (“high public-safety,” “low social-deviancy”) scored highest on PUBLIC_HEALTH and IC-δs the lowest. But the difference between them was trivially small: less than one-third of a standard deviation.

As a measure of the practical difference in these scores, the predicted probability of agreeing that the “benefits of obtaining generally recommended childhood vaccinations outweigh the health risks” was estimated to be 84% (± 3%, LC = 0.95) for a typical IC-α and 74% (± 4%) for a typical IC-δ.
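
For readers curious how figures like these are typically derived, the sketch below fits a logistic regression of a binary “benefits outweigh risks” item on the two continuous predisposition scales and evaluates the predicted probability at scores meant to represent a typical IC-α and a typical IC-δ. The data, the ±1 SD profiles, and the coefficients are all hypothetical; the study’s actual model specification may differ.

```python
# Hedged illustration (simulated data, hypothetical model specification) of
# how predicted probabilities like the 84% vs. 74% figures can be derived:
# fit a logistic regression of a binary "benefits outweigh risks" item on the
# two continuous predisposition scales, then evaluate it at scores meant to
# represent a "typical" IC-alpha and a "typical" IC-delta respondent.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2300
d = pd.DataFrame({"public_safety": rng.normal(size=n),
                  "social_deviancy": rng.normal(size=n)})
true_logit = 1.4 + 0.3 * d["public_safety"] - 0.3 * d["social_deviancy"]
d["agree"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

model = smf.logit("agree ~ public_safety + social_deviancy", data=d).fit(disp=0)

# "Typical" group members stipulated here as +/- 1 SD on each dimension.
typical = pd.DataFrame({"public_safety":   [1.0, -1.0],    # IC-alpha, IC-delta
                        "social_deviancy": [-1.0,  1.0]})
print(model.predict(typical))  # predicted probabilities of agreement
```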

This was consistent with the findings of the Vaccine Risk Perception study’s survey component generally: there is broad-based consensus, even among groups that are bitterly divided on issues like climate change and evolution, that vaccine benefits are high and vaccine risks low. As of today, at least, vaccine risks are not culturally polarizing.

But that could change, the experiment results suggested.

This very modest difference in the perceptions of subjects displaying the IC-α and IC-δ risk dispositions widened significantly among their counterparts in the “anti-science trope” condition. Exposure to the “anti-science” op-ed also drove a wedge between subjects displaying the IC-β (“high public-safety,” “high social-deviancy”) and IC-γ (“low public-safety,” “low social-deviancy”) dispositions, groups whose scores on the PUBLIC_HEALTH scale were indistinguishable in the control.

The practical significance of the difference can be illustrated by examining the impact of the experiment on the predicted probability of agreement with the item measuring “confidence in the judgment of the American Academy of Pediatrics that vaccines are a ‘safe and effective way to prevent serious disease.’” Subjects responded to this item immediately after reading a statement issued by the AAP on vaccine testing and safety. The predicted probability that a subject with a typical IC-δ disposition would indicate a positive level of confidence dropped from 73% (± 4%, LC = 0.95) in the control to 64% (± 7%, LC = 0.95) in the “anti-science” condition; the gap between the predicted probability of a positive assessment by a typical IC-δ and a typical IC-α grew by 14% (± 9%, LC = 0.95) across the two conditions. The gaps between the typical IC-β and both the typical IC-α (7%; ± 7%, LC = 0.95) and the typical IC-δ (6%; ± 6%, LC = 0.95) grew, too, but by a more modest amount. As one would expect, similar divisions characterized responses to other items in the PUBLIC_HEALTH scale.
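
A treatment-by-predisposition interaction is the standard way to capture this kind of widening gap. The sketch below is again purely illustrative, assuming simulated data, a single treatment dummy, and ±1 SD profiles for “typical” IC-α and IC-δ members; it is not the study’s model.

```python
# Illustrative sketch of estimating the polarization effect: a logistic model
# in which an "anti-science" treatment dummy is interacted with the two
# predisposition scales, then the IC-alpha/IC-delta gap in the predicted
# probability of confidence is compared across control and treatment.
# Data, coefficients, and the +/- 1 SD profiles are all hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2300
d = pd.DataFrame({"public_safety": rng.normal(size=n),
                  "social_deviancy": rng.normal(size=n),
                  "anti_science": rng.integers(0, 2, size=n)})  # 1 = treatment
true_logit = (1.0 + 0.2 * d["public_safety"] - 0.2 * d["social_deviancy"]
              + d["anti_science"] * (0.3 * d["public_safety"]
                                     - 0.4 * d["social_deviancy"]))
d["confident"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

m = smf.logit("confident ~ anti_science * (public_safety + social_deviancy)",
              data=d).fit(disp=0)

# IC-alpha and IC-delta profiles in the control (0) and treatment (1) groups.
profiles = pd.DataFrame({"public_safety":   [1, -1, 1, -1],
                         "social_deviancy": [-1, 1, -1, 1],
                         "anti_science":    [0, 0, 1, 1]})
p = m.predict(profiles)
print(f"alpha-delta gap, control:      {p[0] - p[1]:.2f}")
print(f"alpha-delta gap, anti-science: {p[2] - p[3]:.2f}")
```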

There was no similar decrease in the predicted probability that a typical IC-δ would express a positive level of confidence in the other experiment conditions, one of which featured a composite news story proclaiming an impending public health crisis from “declining vaccine rates,” and another of which featured a communication patterned on a typical CDC press release that conveyed accurate information on the high and steady level of vaccination rates in the U.S. over the last decade. But as discussed in a previous post, subjects in the “crisis” condition, not surprisingly, grossly overestimated the degree of parental resistance to universal immunization, an effect that could weaken reciprocal motivations to contribute to the public good of herd immunity.

It is important to realize that the polarizing impact of the “anti-science” op-ed resulted from both the positive effect it had on the vaccination attitudes of IC-α subjects and the negative effect it had on those of IC-δ subjects. The overall (net) effect of the “anti-science” treatment was negligible.

The practical importance of the result, then, turns on the significance attached to the intensified levels of disagreement among subjects of diverse outlooks.

Previous CCP studies, including one involving controversy over the HPV vaccine, suggest that the status of a putative risk source as a symbol or focus of cultural contestation is what disrupts the social processes that ordinarily result in public convergence on the best available evidence relating to societal and health risks.

If this is correct, then any influence that intensifies differences among such groups should be viewed with great concern.

The “anti-science trope,” in sum, is not just contrary to fact.  It is contrary to the tremendous stake that the public has in keeping its vaccine science communication environment free of reason-effacing forms of pollution.
