This graphic is a scatterplot of subjects from a nationally representative panel recruited last summer for CCP studies.
The y-axis is an eight-point climate-change risk-perception measure. Subjects are “color-coded” according to the response they selected.
The x-axis arrays the subjects along a one-dimensional measure of left-right political outlooks, formed by aggregating their responses to a five-point “liberal-conservative” ideology item and a seven-point party-identification one (α = 0.82).
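For the curious, here’s a minimal sketch of that aggregation step. The data are synthetic (not the actual CCP panel) and the variable names are mine; the point is just to show how two standardized items get combined into one scale and how the scale’s reliability (Cronbach’s α) is computed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)  # unobserved left-right disposition (simulated)
# 5-point ideology item and 7-point party-id item, both driven by the latent disposition
ideology = np.clip(np.round(3 + 1.0 * latent + rng.normal(0, 0.8, n)), 1, 5)
party_id = np.clip(np.round(4 + 1.5 * latent + rng.normal(0, 1.2, n)), 1, 7)

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) matrix of responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

items = np.column_stack([zscore(ideology), zscore(party_id)])
left_right = items.mean(axis=1)  # the one-dimensional left-right score
print(f"alpha = {cronbach_alpha(items):.2f}")  # the real scale reports alpha = 0.82
```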
I can tell you “r = -0.65, p < 0.01,” but I think you’ll get the point better if you can see it! (Here’s a good guideline: don’t credit statistics-derived conclusions that you can’t actually see in the data!)
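If you want to apply that guideline yourself, the recipe is simple: compute the statistic, then draw the picture and check that they agree. Another hedged sketch, again with simulated responses rather than the real CCP data:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
n = 1000
left_right = rng.normal(size=n)  # aggregated political-outlook score
# 8-point (0-7) risk item, negatively related to conservatism, as in the Figure
risk = np.clip(np.round(4.0 - 1.6 * left_right + rng.normal(0, 1.5, n)), 0, 7)

r, p = stats.pearsonr(left_right, risk)
print(f"r = {r:.2f}, p = {p:.2g}")  # compare to the reported r = -0.65, p < .01

plt.scatter(left_right, risk, c=risk, cmap="coolwarm_r", s=10, alpha=0.5)
plt.xlabel("left-right political outlook (z-score)")
plt.ylabel("climate-change risk perception (0-7)")
plt.title("Don't just report the statistic; look at it")
plt.show()
```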
BTW, you’ll see exactly this same pattern if you ask people “has the temperature of the earth increased in recent decades,” “has human activity caused the temperature of the earth to increase,” “is the arctic ice melting,” “will climate change have x, y, or z bad effect for people,” etc.
Members of the general public have a general affective orientation toward climate change that shapes all of their more particular beliefs about it. That’s what most of the public’s perceptions of the risks and benefits of any technology, form of behavior, or public policy consist in, assuming people actually have perceptions that it even makes sense to try to measure and analyze. (They don’t on things they’ve never heard of, like nanotechnology.)
The affective logic of risk perception is what makes the industrial-strength climate-change risk-perception measure featured in this graphic so useful. Because ordinary people’s answers to pretty much any question they can actually understand will correlate very strongly with their responses to this single item, administering the industrial-strength measure is a convenient way to collect data that can be reliably analyzed to assess sources of variance in the public’s perceptions of climate-change risks generally.
Indeed, if one asks a question whose responses don’t correlate with this item, then one is necessarily measuring something other than the generic affective orientation that informs (or just is) “public opinion” on climate change.
Whatever it “literally” says or however a researcher might understand it (or suggest it be understood), an item that doesn’t correlate with other valid indicators of the general risk orientation at issue is not a valid measure of it.
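That validity check is easy to operationalize: correlate candidate items with the validated indicator and see whether they cohere. Here’s a sketch; the synthetic battery, the item names, and the 0.3 cutoff are all mine, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
affect = rng.normal(size=n)  # generic affective orientation (simulated)

battery = {
    "industrial_strength": affect + rng.normal(0, 0.5, n),
    "temp_increased":      affect + rng.normal(0, 0.7, n),
    "human_caused":        affect + rng.normal(0, 0.7, n),
    "arctic_melting":      affect + rng.normal(0, 0.8, n),
    "unfamiliar_policy":   rng.normal(size=n),  # noise: respondents never heard of it
}

anchor = battery["industrial_strength"]
for name, item in battery.items():
    if name == "industrial_strength":
        continue
    r = np.corrcoef(anchor, item)[0, 1]
    verdict = "coheres" if abs(r) > 0.3 else "measuring something else"
    print(f"{name:20s} r = {r:+.2f} -> {verdict}")
```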
Consequently, any survey item administered to a valid general-public sample in today’s America that doesn’t generate the sort of partisan division reflected in this Figure is not “valid.” Or in any case, it is necessarily measuring something different from what a large number of competent researchers, employing in a transparent and straightforward manner a battery of climate-change items that cohere with one another and correspond as one would expect to real-world phenomena, have been measuring when they report (consistently, persistently) that there is partisan division on climate-change risks.
We’ll know that partisan polarization is receding when the correlation between valid measures of political outlooks (and like dispositions), on the one hand, and the set of validated indicators of climate-change risk perception, on the other, abates. Or when a researcher collects data using a single validated indicator with a high degree of discernment, like the industrial-strength measure, and no longer observes the pretty (and hideous) picture displayed in the Figure above.
But if you don’t want to wait for that to happen before declaring that the impasse has been broken, well, then it’s really quite easy to present “survey data” that make it seem like the “public” believes all kinds of things that it doesn’t. Because most people have never heard of, much less formed views on, specific policy issues, the answers they give to specific questions about them will be noise. So ask a bunch of questions that don’t genuinely mean anything to the respondents, and then report the random results on whichever ones seem to reflect the claim you’d like to make!
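Don’t believe random answers can look like findings? Here’s a toy demonstration (coin-flip “opinions,” made-up item count, nothing from any real survey): with enough meaningless items, a few “significant” partisan gaps will show up by chance alone, ready to be cherry-picked.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, k = 1000, 50                       # 1,000 respondents, 50 made-up policy items
partisan = rng.integers(0, 2, n)      # 0 = left, 1 = right
answers = rng.integers(0, 2, (n, k))  # pure noise: nobody has real opinions here

pvals = [stats.ttest_ind(answers[partisan == 0, j],
                         answers[partisan == 1, j]).pvalue
         for j in range(k)]
hits = [j for j, p in enumerate(pvals) if p < 0.05]
print(f"{len(hits)} of {k} pure-noise items show a 'significant' "
      f"partisan gap at p < .05: {hits}")
```

Report only the hits and you’ve “found” a public opinion that doesn’t exist.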
Bad pollsters do this. Good social scientists don’t.
Oh, I forgot to say!
I got the idea to do this graphic from “loyal listener” @FrankL!
He proposed doing it w/ the y-axis being “hierarch individualism”/“egalitarian communitarianism,” which certainly can be done, but only by “collapsing” the two worldview dimensions into one, and that sort of defeats the point. But it is also pretty clear that one needs only one dimension of cultural variance to get the difference in climate-change risk perceptions; left-right is good enough (although it definitely is less powerful than culture). Hey, would you like to see a two-dimensional cultural alternative to this figure? Check it out!
@FrankL should contact me to claim a special CCP prize, since this was a very good idea for something fun to do w/ CCP data.