There’s a new study out on the effect of “97% consensus” messaging.
Actually, it is a new analysis of data that were featured in an article published a few months ago in Climatic Change.
The earlier paper reported that after being told that 97% of scientists accept human-caused climate change, study subjects increased their estimate of the percentage of scientists who accept human-caused climate change.
The new paper reports results, not included in the earlier paper, on the effect of the study’s “97% consensus msg” on subjects’ acceptance of climate change, their climate change risk perceptions, and their support for responsive policy measures.
The design of the study was admirably simple:
- ask subjects to characterize on a 0-100 scale their “belief certainty” that climate change is occurring, that it is caused by humans, that it is something to worry about, and that something should be done about it;
- tell the subjects that “97% of climate scientists have concluded that human-caused climate change is happening”; and
- ask the subjects to characterize again their “belief certainty” that climate change is occurring, that it is caused by humans, that it is something to worry about, and that something should be done about it.
Administered to a group of 1,104 members of the US population, the experiment produced these results on the indicated attitudes:
So what does this signify?
According to the authors,
Using pre and post measures from a national message test experiment, we found that all stated hypotheses were confirmed; increasing public perceptions of the scientific consensus causes a significant increase in the belief that climate change is (a) happening, (b) human-caused and (c) a worrisome problem. In turn, changes in these key beliefs lead to increased support for public action.
I gotta say, I just don’t see any evidence in these results that the “97% consensus msg” meaningfully affected any of the outcome variables that the authors’ new writeup focuses on (belief in climate change, perceived risk, support for policy).
It’s hard to know exactly what to make of the 0-100 “belief certainty” measures. They obviously aren’t as easy to interpret as items that ask whether the respondent believes in human-caused climate change, supports a carbon tax etc.
In fact, a reader could understandably mistake the “belief certainty” levels in the table for %’s of subjects who agreed with one or another concrete proposition. To find an explanation of what the “0-100” values are actually measurements of, one has to read the Climatic Change paper – or actually, the on-line supplementary information for the Climatic Change paper.
Weirdly, the authors simply don’t report how the information affected the proportion of subjects who said they believe in climate change, human-caused or otherwise! If the authors have data on the %s who believed in climate change before & after, etc., I’m sure readers would actually be more interested in those….
But based on the “belief certainty” values in the table, it looks to me like the members of this particular sample were, on average, somewhere between ambivalent and moderately certain about these propositions before they got the “97% consensus msg.”
After they got the message, I’d say they were, on average, … somewhere between ambivalent and moderately certain about these propositions.
From “75.19” to “76.88” in “belief certainty”: yes, that’s “increased support for policy action,” but it sure doesn’t look like anything that would justify continuing to spend millions & millions of dollars on a social marketing campaign that has been more or less continuously in gear for over a decade with nothing but the partisan branding of climate science to show for it.
The authors repeatedly stress that the results are “statistically significant.”
But that’s definitely not a thing significant enough to warrant stressing.
Knowing that the difference between something and zero is “statistically significant” doesn’t tell you whether what’s being measured is of any practical consequence.
Indeed, w/ N = 1,104, even quantities that differ from zero by only a very small amount will be “statistically significant.”
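To see why, consider a quick back-of-the-envelope simulation (Python; every number in it is invented, not the authors’ data): a bump of roughly 1.7 points on a 0-100 scale, about the size of the policy-support change above, sails past the p < .05 bar at a sample this size while remaining trivial in standardized terms.

```python
# A minimal simulation, NOT the authors' data or analysis: every number below
# is invented purely to illustrate the arithmetic of "significance" at N ~ 1,100.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1104                               # sample size reported for the study
pre = rng.normal(75, 15, n)            # hypothetical pre-message "belief certainty"
                                       # (ignoring the 0-100 bounds for simplicity)
post = pre + rng.normal(1.7, 17, n)    # hypothetical ~1.7-point average bump

change = post - pre
t, p = stats.ttest_rel(post, pre)
d = change.mean() / change.std(ddof=1)   # standardized within-subject effect size

print(f"mean change: {change.mean():.2f} points on a 0-100 scale")
print(f"paired t = {t:.2f}, p = {p:.4f}")   # comfortably "statistically significant"
print(f"standardized effect d = {d:.2f}")   # still small in practical terms
```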
The question is, What can we infer from the results, practically speaking?
A collection of regression coefficients in a path diagram can’t help anyone figure that out.
Maybe there’s more to say about the practical magnitude of the effects, but unfortunately the researchers don’t say it.
For sure they don’t say anything that would enable a reader to assess whether the “97% message” had a meaningful impact on political polarization.
They say this:
While the model “controls” for the effect of political party, we also explicitly tested an alternative model specification that included an interaction-effect between the consensus-treatments and political party identification. Because the interaction term did not significantly improve model fit (nor change the significance of the coefficients), it was not represented in the final model (to preserve parsimony). Yet, it is important to note that the interaction itself was positive and significant (β = 3.25, SE = 0.88, t = 3.68, p < 0.001); suggesting that compared to Democrats, Republican subjects responded particularly well to the scientific consensus message.
This is perplexing….
If adding an interaction term didn’t “significantly improve model fit,” that implies the incremental explanatory power of treating the “97% msg” as different for Rs and Ds was not significantly different from zero. So one should view the message’s effect as the same for both groups.
Yet the authors then say that the “interaction itself was positive and significant” and that therefore Rs should be seen as “respond[ing] particularly well” relative to Ds. By the time they get to the conclusion of the paper, the authors state that “the consensus message had a larger influence on Republican respondents,” although on what – their support for policy action? belief in climate change? their perception of the % of scientists who believe in climate change? – is not specified….
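For readers who want to see what “did the interaction improve model fit?” ordinarily cashes out to, here is a sketch on simulated data using plain OLS rather than the authors’ path model. One relevant detail: in OLS, the F-test for adding a single interaction term is just the square of the t-test on its coefficient, which makes the two quoted statements especially hard to square; an SEM/path model may use different fit criteria, but the paper doesn’t say.

```python
# Simulated data, plain OLS (NOT the authors' path/SEM model): the nested-model
# comparison that "did the interaction improve fit?" ordinarily refers to.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1104
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),      # saw the 97% message (hypothetical)
    "republican": rng.integers(0, 2, n),   # crude party indicator (hypothetical)
})
# Invented outcome: small main effect of the message, small extra bump for treated Republicans.
df["belief"] = (70 + 2 * df["treated"] - 10 * df["republican"]
                + 3 * df["treated"] * df["republican"]
                + rng.normal(0, 20, n))

m0 = smf.ols("belief ~ treated + republican", data=df).fit()   # no interaction
m1 = smf.ols("belief ~ treated * republican", data=df).fit()   # with interaction

print(m1.params["treated:republican"], m1.pvalues["treated:republican"])
print(m1.compare_f_test(m0))   # (F, p, df_diff) -- for one added term, F is just t^2
```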
Again, though, the question isn’t whether the authors found a correlation the size of which was “significantly different” from zero.
It’s whether the results of the experiment generated a practically meaningful result.
Once more the answer is, “Impossible to say but almost surely not.”
I’ll assume the Rs and Ds in the study were highly polarized “before” they got the “97% consensus msg” (if not, then the sample was definitely not a valid one for trying to model science communication dynamics in the general population).
But because the authors don’t report what the before-and-after-msg “belief certainty” means were for Rs and Ds, there’s simply no way to know whether the “97% consensus msg’s” “larger” impact on Rs meaningfully reduced polarization.
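Reporting it would take only a few lines given the raw data. Here is a sketch, on wholly invented numbers, of the party-by-wave summary that would answer the polarization question directly:

```python
# Hypothetical sketch: everything here (party split, means, sizes of shifts) is
# invented; it only shows the party-by-wave summary the paper could have reported.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1104
party = rng.choice(["Democrat", "Republican"], n)
pre = np.where(party == "Democrat",
               rng.normal(85, 15, n),    # hypothetical pre-message certainty, Ds
               rng.normal(55, 20, n))    # hypothetical pre-message certainty, Rs
post = pre + np.where(party == "Republican",
                      rng.normal(3, 10, n),   # hypothetical "larger" bump for Rs
                      rng.normal(1, 10, n))

df = pd.DataFrame({"party": party, "pre": pre, "post": post})
summary = df.groupby("party")[["pre", "post"]].mean()
summary["change"] = summary["post"] - summary["pre"]
print(summary.round(1))

gap_before = summary.loc["Democrat", "pre"] - summary.loc["Republican", "pre"]
gap_after = summary.loc["Democrat", "post"] - summary.loc["Republican", "post"]
print(f"D-R gap before: {gap_before:.1f} points; after: {gap_after:.1f} points")
```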
All we can say is that whatever outcome it was on, the “larger” impact the msg had on Rs must still have been pretty darn small, given how remarkably unimpressive the changes were in the climate-change beliefs, risk perceptions, and policy attitudes for the sample as a whole.
Sigh….
The authors state that their “findings provide the strongest evidence to date that public understanding of the scientific consensus is consequential.”
If this is the strongest case that can be made for “97% consensus messaging,” there should no longer be any doubt in the minds of practical people – ones making decisions about how to actually do constructive things in the real world – that it’s time to try something else.
To be against “97% consensus messaging” is not to be against promoting public engagement with scientific consensus on climate change.
It’s to be against wasting time & money & hope on failed social marketing campaigns that are wholly disconnected from the best evidence we have on the sources of public conflict on this issue.
Not surprisingly, there is clearly a lot of confusion about what the study actually measured.
The text of the paper doesn’t indicate the actual wording of the outcome measures. Neither does the on-line “supporting information,” nor does another paper based on these same data.
To find out what the “0-100” “belief certainty” items actually say, readers must access that second paper’s on-line supplement.
If one reads that document, one will discover that the authors, contrary to what they represent, didn’t actually measure whether their subjects believed in “human-caused climate change.”
Here’s the item:
Belief in human causation
Subjects were asked the following question; “Assuming climate change IS happening: How much of it do you believe is caused by human activities, natural changes in the environment, or some combination of both?” Response options were given on a continuum, ranging from 0 (I believe that climate change is caused entirely by natural changes in the environment), 50 (I believe that climate change is caused equally by natural changes and human activities) to 100 (I believe that climate change is caused entirely by human activities).
Since the instruction directs subjects to “assum[e] climate change IS happening,” the item necessarily solicited counterfactual responses from all the respondents who didn’t believe in climate change even after being exposed to the “97% consensus” message.
I’m not sure how to characterize what a 4-point change – from “63.98” to “68.02” – on a 100-point counterfactual “belief certainty” scale signifies.
But for sure, it cannot accurately be described as showing what the authors report: that the “97% consensus message” “cause[d] a significant increase in the belief that climate change is human-caused.”
That would be like saying that the message “97% of clerics believe 10^3 angels fit on the head of a pin” had “caused a significant increase in the belief that 10^3 angels fit on the head of a pin” based on responses from subjects instructed “Assuming there ARE angels, how many would you say fit on the head of a pin . . . ?”
No way that anyone reading the write-up of the study would have any idea that this is the sort of measure the studies used – and no way they’d ever find out w/o the 30 mins of searching through multiple texts that I had to engage in to figure out what the study’s items actually say….
Here are the remaining outcome measures:
Belief in Climate Change
Subjects were asked the following question; “How strongly do you believe that climate change is or is not happening?” Response options were given on a continuum, ranging from 0 (I strongly believe that climate change is not happening), 50 (I am unsure whether or not climate change is happening) to 100 (I strongly believe climate change IS happening).
Worry about Climate Change
Subjects were asked the following question; “On a scale from 0 to 100, how worried are you about climate change?” Response options were given on a continuum, ranging from 0 (I am not at all worried), 50 (neutral) to 100 (I am very worried).
Required Action
Subjects were asked the following question; “Do you think people should be doing more or less to reduce climate change?” Response options were given on a continuum, ranging from 0 (Much less), 50 (Same amount) to 100 (Much more).
These measures are very hard to interpret.
I have no idea why the authors didn’t just use – or, if they did use, didn’t report – items asking whether respondents believed in human-caused climate change and supported one or another policy to mitigate it. Then the before-and-after effects of the “97% consensus” msg, assessed separately for Republicans and Democrats, could have been readily observed and their practical importance gauged.
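For illustration only (invented data, a hypothetical yes/no belief item), this is the kind of before/after-by-party table of %s that would make the practical effect, or the lack of one, visible at a glance:

```python
# Invented data and a hypothetical yes/no item: the point is only the shape of
# the table -- % answering "yes" before vs. after the message, by party.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1104
party = rng.choice(["Democrat", "Republican"], n)
p_pre = np.where(party == "Democrat", 0.80, 0.40)    # hypothetical baseline rates
p_post = np.where(party == "Democrat", 0.82, 0.45)   # hypothetical post-message rates
believe_pre = rng.random(n) < p_pre
believe_post = rng.random(n) < p_post

df = pd.DataFrame({"party": party, "pre": believe_pre, "post": believe_post})
pct = df.groupby("party")[["pre", "post"]].mean().mul(100).round(1)
print(pct)   # % saying "yes, human-caused climate change is happening"
```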
But in any case, researchers should always state in a clear and accessible way what their outcome measures actually were, in order to avoid misleading people.
If you think I’m pointing these things out because I don’t believe the U.S. public should know that there is scientific consensus that human activity is causing climate change, you are really really missing the point.
I’m pointing this out because I think that the question of how to dispel the cultural conflict that is preventing the U.S. public from recognizing that there is such consensus should be answered on the basis of valid empirical studies, which then ought to inform science communication.
Anyone who thinks that there is any effective alternative to that approach for improving the state of public discourse about climate change in this country is deeply mistaken.