“the strongest evidence to date” on the effect of “97% consensus” messaging

There’s a new study out on the effect of “97% consensus” messaging.

Actually, it is a new analysis of data that were featured in an article published a few months ago in Climatic Change.

The earlier paper reported that after being told that 97% of scientists accept human-caused climate change, study subjects increased their estimate of the percentage of scientists who accept human-caused climate change.

The new paper reports results, not included in the earlier paper, on the effect of the study’s “97% consensus msg” on subjects’ acceptance of climate change, their climate change risk perceptions, and their support for responsive policy measures.

The design of the study was admirably simple:

  1. Ask subjects to characterize on a 0-100 scale their “belief certainty” that climate change is occurring, that it is caused by humans, that it is something to worry about, and that something should be done about it;
  2. tell the subjects that “97% of climate scientists have concluded that human-caused climate change is happening”; and
  3. ask the subjects to characterize again their “belief certainty” that climate change is occurring, that it is caused by humans, that it is something to worry about, and that something should be done about it.

Administered to a group of 1,104 members of the US population, the experiment produced these results on the indicated attitudes:

So what does this signify?

According to the authors,

Using pre and post measures from a national message test experiment, we found that all stated hypotheses were confirmed; increasing public perceptions of the scientific consensus causes a significant increase in the belief that climate change is (a) happening, (b) human-caused and (c) a worrisome problem. In turn, changes in these key beliefs lead to increased support for public action.

I gotta say, I just don’t see any evidence in these results that the “97% consensus msg” meaningfully affected any of the outcome variables that the authors’ new writeup focuses on (belief in climate change, perceived risk, support for policy).

It’s hard to know exactly what to make of the 0-100 “belief certainty” measures. They obviously aren’t as easy to interpret as items that ask whether the respondent believes in human-caused climate change, supports a carbon tax, etc.

In fact, a reader could understandably mistake the “belief certainty” levels in the table as %’s of subjects who agreed with one or another concrete proposition. To find an explanation of what the “0-100” values are actually measurements of, one has to read the Climatic Change paper, or rather the online supplementary information for the Climatic Change paper.

Weirdly, the authors simply don’t report how the information affected the proportion of subjects who said they believe in climate change, human-caused or otherwise! If the authors have data on the %s of subjects who believed in climate change before & after the message, I’m sure readers would actually be more interested in those….

But based on the “belief certainty” values in the table, it looks to me like the members of this particular sample were, on average, somewhere between ambivalent and moderately certain about these propositions before they got the “97% consensus msg.”

After they got the message, I’d say they were, on average, … somewhere between ambivalent and moderately certain about these propositions.

From “75.19” to “76.88” in “belief certainty”: yes, that’s “increased support for policy action,” but it sure doesn’t look like anything that would justify continuing to spend millions & millions of dollars on a social marketing campaign that has been more or less continuously in gear for over a decade with nothing but the partisan branding of climate science to show for it.

The authors repeatedly stress that the results are “statistically significant.”

But “statistically significant” isn’t the kind of significance that warrants stressing here.

Knowing that the difference between something and zero is “statistically significant” doesn’t tell you whether what’s being measured is of any practical consequence.

Indeed, w/ N = 1,104, even quantities that differ from zero by only a very small amount will be “statistically significant.”
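To see how far apart “statistically significant” and “practically meaningful” can be at this sample size, here’s a minimal simulation (my own, not the authors’): it assumes a ~1.7-point average shift on the 0-100 scale, matching the reported 75.19 → 76.88 policy-support means, plus made-up standard deviations.

```python
# Illustrative simulation only -- not the study's data. The ~1.7-point shift mirrors
# the reported 75.19 -> 76.88 policy-support means; the spread is assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1104

pre = np.clip(rng.normal(75.2, 20, n), 0, 100)        # assumed SD of 20 on the 0-100 scale
post = np.clip(pre + rng.normal(1.7, 10, n), 0, 100)  # small average shift, noisy per subject

diff = post - pre
res = stats.ttest_rel(post, pre)                       # paired t-test on the pre/post scores
d = diff.mean() / diff.std(ddof=1)                     # standardized within-subject effect size

print(f"mean change = {diff.mean():.2f} points, "
      f"t = {res.statistic:.2f}, p = {res.pvalue:.2g}, d = {d:.2f}")
# With n = 1,104 the p-value lands far below .05, yet the change is under 2 points on a
# 0-100 scale -- "significant," but that tells you nothing about practical importance.
```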

The question is, What can we infer from the results, practically speaking?

A collection of regression coefficients in a path diagram can’t help anyone figure that out.

Maybe there’s more to say about the practical magnitude of the effects, but unfortunately the researchers don’t say it.

For sure they don’t say anything that would enable a reader to assess whether the “97% message” had a meaningful impact on political polarization.

They say this:

While the model “controls” for the effect of political party, we also explicitly tested an alternative model specification that included an interaction-effect between the consensus-treatments and political party identification. Because the interaction term did not significantly improve model fit (nor change the significance of the coefficients), it was not represented in the final model (to preserve parsimony). Yet, it is important to note that the interaction itself was positive and significant (β = 3.25, SE = 0.88, t = 3.68, p < 0.001); suggesting that compared to Democrats, Republican subjects responded particularly well to the scientific consensus message.

This is perplexing….

If adding an interaction term didn’t “significantly improve model fit,” that implies the incremental explanatory power of treating the “97% msg” as different for Rs and Ds was not significantly different from zero. So by the authors’ own model-selection logic, one should view the message’s effect as the same for Rs and Ds.

Yet the authors then say that the “interaction itself was positive and significant” and that therefore Rs should be seen as “respond[ing] particularly well” relative to Ds. By the time they get to the conclusion of the paper, the authors state that “the consensus message had a larger influence on Republican respondents,” although on what (their support for policy action? belief in climate change? their perception of the % of scientists who believe in climate change?) is not specified….
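To make the tension concrete, here is what the two tests look like side by side in plain OLS on made-up data. The variable names and effect sizes are invented, and the authors’ actual model is a path model, so this is only a sketch of the logic, not their analysis:

```python
# Hypothetical data -- not the authors' dataset; column names are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 1104
df = pd.DataFrame({
    "treated":    rng.integers(0, 2, n),    # saw the "97% consensus msg" or not
    "republican": rng.integers(0, 2, n),    # party identification (0 = Dem, 1 = Rep)
})
df["belief_change"] = (2.0 * df["treated"]
                       + 3.0 * df["treated"] * df["republican"]   # assumed interaction
                       + rng.normal(0, 20, n))

m0 = smf.ols("belief_change ~ treated + republican", data=df).fit()
m1 = smf.ols("belief_change ~ treated * republican", data=df).fit()

print(anova_lm(m0, m1))           # Test 1: does the interaction improve model fit? (F test)
print(m1.summary().tables[1])     # Test 2: is the interaction coefficient "significant"? (t test)

# In ordinary least squares, adding a single interaction term makes these the same test
# (F = t^2): if the coefficient is significant, the fit improvement is too. Reporting
# "no improvement in fit" alongside a significant interaction is therefore hard to square.
```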

Again, though, the question isn’t whether the authors found a correlation the size of which was “significantly different” from zero.

It’s whether the results of the experiment generated a practically meaningful result.

Once more the answer is, “Impossible to say but almost surely not.”

I’ll assume the Rs and Ds in the study were highly polarized “before” they got the “97% consensus msg” (if not, then the sample was definitely not a valid one for trying to model science communication dynamics in the general population).

But because the authors don’t report what the before-and-after-msg “belief certainty” means were for Rs and Ds, there’s simply no way to know whether the “97% consensus msg’s” “larger” impact on Rs meaningfully reduced polarization.
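For what it’s worth, the calculation that would answer the question is simple; it just requires the party-level pre/post means that the paper doesn’t report (the column names below are hypothetical):

```python
# Column names are hypothetical -- the paper doesn't report party-level pre/post means.
import pandas as pd

def polarization_change(df: pd.DataFrame) -> float:
    """Change in the D-vs-R gap in 'belief certainty' from before to after the message
    (negative = the message reduced polarization)."""
    gap_pre  = (df.loc[df.party == "D", "belief_pre"].mean()
                - df.loc[df.party == "R", "belief_pre"].mean())
    gap_post = (df.loc[df.party == "D", "belief_post"].mean()
                - df.loc[df.party == "R", "belief_post"].mean())
    return gap_post - gap_pre

# polarization_change(study_df)  # can't be run on the published results -- the inputs aren't there
```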

All we can say is that, whatever outcome it was on, the “larger” impact the msg had on Rs must still have been pretty darn small, given how remarkably unimpressive the changes were in the climate-change beliefs, risk perceptions, and policy attitudes for the sample as a whole.

Sigh….

The authors state that their “findings provide the strongest evidence to date that public understanding of the scientific consensus is consequential.”

If this is the strongest case that can be made for “97% consensus messaging,” there should no longer be any doubt in the minds of practical people, the ones making decisions about how to actually do constructive things in the real world, that it’s time to try something else.

To be against “97% consensus messaging” is not to be against promoting public engagement with scientific consensus on climate change.

It’s to be against wasting time & money & hope on failed social marketing campaigns that are wholly disconnected from the best evidence we have on the sources of public conflict on this issue.
