It is almost universally assumed that political polarization over societal risks like climate change originates in different levels of trust in scientists: left-leaning people believe in human-caused climate change, it is said, because they have a greater degree of confidence in scientists; so-called “conservative Republicans,” in contrast, are said to distrust science and scientists and thus to be predisposed to climate skepticism.
But is this right? Or are we looking at another form of the dreaded WEKS disease?
Well, here’s a simple test based on GSS data.
Using the 2010 & 2016 datasets (the only years in which the survey included the climate-risk outcome variable), I cobbled together a decent “trust in science” scale:
scibnfts5: “People have frequently noted that scientific research has produced benefits and harmful results. Would you say that, on balance, the benefits of scientific research have outweighed the harmful results, or have the harmful results of scientific research been greater than its benefits?” [5 points: strongly in favor of beneficial results . . . strongly in favor of harmful results]
consci: “As far as the people running [the science community] are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?”
scientgo: “Scientific researchers are dedicated people who work for the good of humanity.” [4 points: strongly agree . . . strongly disagree]
scienthe: “Scientists are helping to solve challenging problems.” [4 points: strongly agree . . . strongly disagree]
nextgen: “Because of science and technology, there will be more opportunities for the next generation.” [4 points: strongly agree . . . strongly disagree]
advfront: “Even if it brings no immediate benefits, scientific research that advances the frontiers of knowledge is necessary and should be supported by the federal government.” [4 points: strongly agree . . . strongly disagree]
scientbe: “Most scientists want to work on things that will make life better for the average person.” [4 points: strongly agree . . . strongly disagree]
These items formed a single factor and scaled with a Cronbach’s α of 0.72. Not bad. I also reverse-coded items as necessary so that for every item a higher score denotes more rather than less trust in science.
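For concreteness, here’s a minimal sketch of the scale construction in Python, assuming the 2010 & 2016 GSS extracts have been merged into one file. The file name, which items need flipping, and the composite name trust_sci are my illustrative choices, not anything from the GSS itself:

```python
import pandas as pd

# Hypothetical merged extract of the 2010 & 2016 GSS waves.
gss = pd.read_csv("gss_2010_2016.csv")

items = ["scibnfts5", "consci", "scientgo", "scienthe",
         "nextgen", "advfront", "scientbe"]

# Reverse-code "as necessary" so higher = more trust; which items need
# flipping depends on the raw codings, so this list is illustrative.
for col in ["scientgo", "scienthe", "nextgen", "advfront", "scientbe"]:
    gss[col] = (gss[col].max() + gss[col].min()) - gss[col]

def cronbach_alpha(df):
    """Cronbach's alpha from item-level responses (listwise deletion)."""
    df = df.dropna()
    k = df.shape[1]
    return (k / (k - 1)) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

print(cronbach_alpha(gss[items]))  # the text reports 0.72

# Composite trust scale: mean of the standardized items.
z = gss[items].apply(lambda s: (s - s.mean()) / s.std())
gss["trust_sci"] = z.mean(axis=1)
```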
Surprisingly, the GSS has never had a particularly good set of climate-change “belief” and risk-perception items. Nevertheless, it has sometimes fielded this question:
TEMPGEN: “In general, do you think that a rise in the world’s temperature caused by the ‘greenhouse effect’ is extremely dangerous for the environment . . . not dangerous at all for the environment?” [5 points: extremely dangerous for the environment . . . not dangerous at all for the environment]
I don’t love this item, but it is a cousin of the revered Industrial Strength Risk Perception Measure, so I decided I’d give it a whirl.
I then ran some regressions (after, of course, eyeballing the raw data).
In the first model, I regressed a reverse-coded TEMPGEN on the science-trust scale and “left_right,” a composite political-outlook scale formed by aggregating the study participants’ self-reported liberal–conservative ideology and party identification (α = 0.66). As expected, higher scores on the science-trust scale predicted responses of “very dangerous” and “extremely dangerous,” while higher (more conservative) left_right scores predicted responses of “not very dangerous” and “not dangerous at all.”
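A sketch of that first model with statsmodels, continuing from the snippet above. Treating polviews and partyid as the ideology and party-identification items, and my tempgen_r / left_right names, are assumptions for illustration:

```python
import statsmodels.formula.api as smf

# Composite political outlook: mean of standardized ideology and party ID
# (higher = more conservative); polviews/partyid mnemonics assumed.
pol = gss[["polviews", "partyid"]].apply(lambda s: (s - s.mean()) / s.std())
gss["left_right"] = pol.mean(axis=1)

# Reverse-code TEMPGEN so higher = perceives more danger.
gss["tempgen_r"] = (gss["tempgen"].max() + gss["tempgen"].min()) - gss["tempgen"]

# Model 1: additive specification.
m1 = smf.ols("tempgen_r ~ trust_sci + left_right", data=gss).fit()
print(m1.summary())  # expect trust_sci > 0, left_right < 0
```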
If one stops there, the result is an affirmation of the common wisdom. Both political outlooks and trust in science have the signs one would expect, and if one were to add their coefficients, one could make claims about how much more likely relatively conservative respondents would be to see greater risk if only they could be made to trust science more.
But this form of analysis is incomplete. In particular, it assumes that the contributions trust in science and left_right make to perceptions of the danger of climate change are (once their covariance is partialed out) independent and linear, and hence additive.
But why assume that trust in science has the same effect regardless of respondents’ ideologies? After all, we know that science comprehension’s impact on perceived climate-change risks varies in relation to ideology, magnifying polarization. Shouldn’t we at least check to see if there is a comparable interaction between political outlooks and trust?
So I created a cross-product interaction term and added it to a second regression model. And sure enough, there was an interaction, one predicting in particular that we ought to expect even more partisan polarization as right- and left-leaning individuals’ scores on the trust-in-science scale increased.
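In formula notation the cross-product model is one line; `*` expands to both main effects plus their interaction (a sketch, continuing from the snippets above):

```python
# Model 2: main effects plus the trust x outlook cross-product.
m2 = smf.ols("tempgen_r ~ trust_sci * left_right", data=gss).fit()
print(m2.params[["trust_sci", "left_right", "trust_sci:left_right"]])
# A negative interaction coefficient means the trust "effect" shrinks as
# left_right rises -- i.e., polarization grows as trust increases.
```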
Here’s what the interaction looks like:
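One way to draw that picture is from model 2’s fitted values; the ±1 cut-points standing in for “left-leaning” and “right-leaning” below are illustrative, not anything estimated from the data:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Predicted danger ratings across the trust scale, holding political
# outlook at illustrative "left" (-1) and "right" (+1) values.
xs = np.linspace(gss["trust_sci"].min(), gss["trust_sci"].max(), 50)
for lr, label in [(-1.0, "left-leaning"), (1.0, "right-leaning")]:
    fit = m2.predict(pd.DataFrame({"trust_sci": xs, "left_right": lr}))
    plt.plot(xs, fit, label=label)
plt.xlabel("trust in science (standardized scale)")
plt.ylabel("predicted climate danger rating (TEMPGEN, reversed)")
plt.legend()
plt.show()
```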
Geez! Higher trust promotes greater risk concern for left-leaning respondents but has essentially no effect whatsoever on right-leaning ones.
What to say?…
Well, one possibility that occurs to me is based on biased perceptions of scientific consensus. Experimental data suggest that ordinary persons of diverse outlooks are more likely to notice, assign significance to, and recall instances in which a scientist took a position consistent with their cultural group’s than ones in which a scientist took the opposing position. As a result, people end up with mental inventories of expert opinion skewed toward the position that predominates in their group. If that’s how they perceive the weight of expert opinion, why would they distrust scientists?
But I dunno. This is just post hoc speculation.
Tell me what you think the answer is – and better still, how one could design an experiment to test your favored conjecture against whatever you think the second most likely answer is.