Replicate “Climate-Science Communication Measurement Problem”? No sweat (despite hottest yr on record), thanks to Pew Research Center!

One of the great things about Pew Research Center is that it posts all (or nearly all!) the data from its public opinion studies.  That makes it possible for curious & reflective people to do their own analyses and augment the insight contained in Pew’s own research reports.

I’ve been playing around with the “public” portion of the “public vs. scientists” study, which was issued last January (Pew 2015). Actually, Pew hasn’t released the “scientist” (or more accurately, AAAS membership) portion of the data. I hope they do!

But one thing I thought would be interesting to do for now is to see whether I could replicate the essential finding from “The Climate Science Communication Measurement Problem” (2015).

In that paper, I presented data suggesting, first, that neither “belief” in evolution nor “belief” in human-caused climate change is a measure of general science literacy.  Rather, both are better understood as measures of forms of “cultural identity,” indicated, respectively, by items relating to religiosity and items relating to left-right political outlooks.

Second, and more importantly, I presented data suggesting that there is no relationship between “belief” in human-caused climate change & climate science comprehension in particular. On the contrary, the higher individuals scored on a valid climate science comprehension measure (one specifically designed to avoid the conflation of identity and knowledge that contaminates most “climate science literacy” measures), the more polarized the respondents were on “belief” in AGW–which, again, is best understood as simply an indicator of “who one is,” culturally speaking.

Well, it turns out one can see the same patterns, very clearly, in the Pew data.

Patterned on the NSF Indicators “basic facts” science literacy test (indeed, “lasers” is an NSF item), the Pew battery consists of six items.

As I’ve explained before, I’m not a huge fan of the “basic facts” approach to measuring public science comprehension. In my view, items like these aren’t well-suited for measuring what a public science comprehension assessment ought to be measuring: a basic capacity to recognize and give proper effect to valid scientific evidence relevant to the things that ordinary people do in their ordinary lives as consumers, workforce members, and citizens.

Certainly one would expect a person with that capacity to have become familiar with certain basic scientific insights (the earth goes round the sun, etc.).  But certifying that she has stocked her “basic fact” inventory with any particular set of such propositions doesn’t give us much reason to believe that she possesses the reasoning proficiencies & dispositions needed to augment her store of knowledge and to appropriately use what she learns in her everyday life.

For that, I believe, a public science comprehension battery needs at least a modest complement of scientific-thinking measures, ones that attest to a respondent’s ability to tell the difference between valid and invalid forms of evidence and to draw sound inferences from the former.  The “Ordinary Science Intelligence” battery, used in the Measurement Problem paper, includes “cognitive reflection” and “numeracy” modules for this purpose.

Indeed, Pew has presented a research report on a fuller science comprehension battery that might be better in this regard, but it hasn’t released the underlying data for that one.

But anyway, the new items that Pew included in its battery are more current & subtle than the familiar Indicator items, & the six Pew items form a reasonably reliable (α = 0.67), one-dimensional scale–suggesting they are indeed measuring some sort of science-related aptitude.
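
For anyone following along at home, here is a minimal sketch of that reliability check in Python. The column names (know1–know6) and the file name are placeholders, not Pew’s actual variable names; the real ones are in the codebook that comes with the public-use dataset.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical column names for the six Pew knowledge items, each scored 1 = correct, 0 = incorrect.
sci_items = ["know1", "know2", "know3", "know4", "know5", "know6"]
# df = pd.read_csv("pew_2014_public.csv")                    # however the public-use file has been saved
# print(round(cronbach_alpha(df[sci_items].dropna()), 2))    # should land near the 0.67 reported above
```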

But the fun stuff starts when one examines how the resulting Pew science literacy scale relates to items on evolution, climate change, political outlooks, and religiosity.

For evolution, Pew used its two-part question, which first asks whether the respondent believes (1) “Humans and other living things have evolved over time” or (2) “Humans and other living things have existed in their present form since the beginning of time.”

Subjects who pick (1) then are asked whether (3) “Humans and other living things have evolved due to natural processes such as natural selection” or (4) “A supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today.”

Basically, subjects who select (2) are “young earth creationists.” Subjects who select (4) are generally regarded as believing in “theistic evolution.”  Intelligent design isn’t the only variant of “theistic evolution,” but it is certainly one of the accounts that fits this description.

Subjects who select (3)–“humans and other living things have evolved due to natural processes such as natural selection”–are the only ones furnishing the response that reflects science’s account of the natural history of humans.

So I created a variable, “evolution_c,” that reflects this answer, which was in fact selected by only 35% of the subjects in Pew’s U.S. general public sample.

On climate change, Pew assessed (using two items that tested for item order/structure effects that turned out not to matter) whether subjects believed (1) “the earth is getting warmer mostly because of natural patterns in the earth’s environment,” (2) “the earth is getting warmer mostly because of human activity such as burning fossil fuels,” or (3) “there is no solid evidence that the earth is getting warmer.”

About 50% of the respondents selected (2).  I created a variable, gw_c, to reflect whether respondents selected that response or one of the other two.
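
Both dichotomous recodes are one-liners. Here is a sketch, again with made-up raw variable names and response codes (the real ones are in the Pew codebook), applied to the same hypothetical DataFrame df as above.

```python
import numpy as np

# Hypothetical raw variable names and response codes; check the Pew codebook for the real ones.
# evol_raw: 1 = evolved via natural processes, 2 = evolution guided by a supreme being,
#           3 = existed in present form since the beginning of time
# gw_raw:   1 = warming due to natural patterns, 2 = warming due to human activity,
#           3 = no solid evidence the earth is getting warmer
df["evolution_c"] = np.where(df["evol_raw"].notna(), (df["evol_raw"] == 1).astype(float), np.nan)
df["gw_c"] = np.where(df["gw_raw"].notna(), (df["gw_raw"] == 2).astype(float), np.nan)

# Quick sanity check against the proportions reported above (~35% and ~50%).
# print(df[["evolution_c", "gw_c"]].mean())
```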

For political orientations, I combined subjects’ responses to a 5-point liberal-conservative ideology item and their responses to a 5-point partisan self-identification item (1 “Democrat”; 2 “Independent leans Democrat”; 3 “Independent”; 4 “Independent leans Republican”; and 5 “Republican”).  The composite scale had modest reliability (α = 0.61).

For religiosity, I combined two items.  One was a standard Pew item on church attendance. The other was a dummy variable, “nonrelig,” scored “1” for subjects who said they were “atheists,” “agnostics,” or “nothing in particular” in response to a religious-denomination item (α = 0.66).
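
Both composites are just standardized sums of two standardized items. A sketch, with hypothetical variable names and with coding directions that would need to be verified against the codebook:

```python
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std(ddof=1)

# Hypothetical variable names:
# ideo:     1-5, liberal -> conservative      pid: 1-5, Democrat -> Republican
# attend:   church attendance, oriented so higher = more frequent
# nonrelig: 1 = atheist / agnostic / "nothing in particular", 0 = otherwise
df["right_left"] = zscore(zscore(df["ideo"]) + zscore(df["pid"]))      # composite, alpha ~ 0.61
df["relig"] = zscore(zscore(df["attend"]) - zscore(df["nonrelig"]))    # nonrelig reverse-coded, alpha ~ 0.66
```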

But the very first thing I did was toss all of these items–the six “science literacy” ones, belief in evolution (evolution_c), belief in human-caused climate change (gw_c), ideology, partisan self-identification, church attendance, and nonreligiosity–into a factor analysis (one based on a polychoric covariance matrix, which is appropriate for a mix of dichotomous and multi-response Likert items).
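
A rough way to approximate this in Python is with the factor_analyzer package, with the caveat that it works from ordinary Pearson correlations; reproducing the polychoric-based analysis exactly would require computing that matrix separately. Variable names here are the hypothetical ones from the sketches above.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer   # pip install factor_analyzer

cols = sci_items + ["evolution_c", "gw_c", "ideo", "pid", "attend", "nonrelig"]
X = df[cols].dropna()

# Three-factor exploratory factor analysis with an oblique rotation.
# NOTE: this uses Pearson correlations; the analysis described in the post used a
# polychoric covariance matrix, which would have to be computed separately.
fa = FactorAnalyzer(n_factors=3, rotation="promax", method="minres")
fa.fit(X)

loadings = pd.DataFrame(fa.loadings_, index=cols, columns=["F1", "F2", "F3"])
print(loadings.round(2))   # inspect which factors evolution_c and gw_c load on
```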

Not surprisingly, the covariance structure was best accounted for by three latent factors: one for science literacy, one for political orientations, and one for religiosity.

But the most important result was that neither belief in evolution nor belief in human-caused climate change loaded on the “science literacy” factor.  Instead they loaded on the religiosity and right-left political orientation factors, respectively.

This analysis, which replicated results from a paper dedicated solely to examining the properties of the Ordinary Science Intelligence test (Kahan 2014), supports the inference that belief in evolution and belief in climate change are not indicators of “science comprehension” but rather indicators of cultural identity, as manifested respectively by religiosity and right-left political outlooks.

To test this inference further, I used “differential item functioning” (“DIF”) analysis (Osterlind & Everson, 2009).

Based on item response theory, DIF analysis examines whether a test item is “culturally biased”–not in an animus sense but in a measurement one: the question is whether responses to the item measure the “same” latent proficiency (here, science literacy) in diverse groups.  If they don’t–if there is a difference in the probability that members of two groups with equivalent science literacy scores will answer the item “correctly”–then administering that question to members of both groups will result in a biased measurement of their respective levels of that proficiency.
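
One common way to operationalize such a DIF check is a logistic regression in which the item response is modeled as a function of the proficiency score, group membership, and their interaction. A minimal sketch, assuming hypothetical column names (scilit for the scored literacy scale, relig_hi for an above-median religiosity indicator):

```python
import statsmodels.formula.api as smf

# Logistic-regression DIF check: does (high) religiosity change the relationship between
# the science literacy score and the probability of the science-consistent evolution response?
# 'scilit' and 'relig_hi' are hypothetical column names, not Pew's.
dif_evol = smf.logit("evolution_c ~ scilit * relig_hi", data=df).fit()
print(dif_evol.summary())
# A substantial relig_hi main effect or scilit:relig_hi interaction is the signature
# of differential item functioning.
```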

In Measurement Problem, I used DIF analysis to show that belief in evolution is “biased” against individuals who are high in religiosity.

Using the Pew data (regression models here), one can see the same bias:

Relatively nonreligious respondents, but not relatively religious ones, become more likely to furnish the response reflecting science’s account of the natural history of humans as their science literacy scores increase. This isn’t so for the other items in the Pew science literacy battery (which here is scored using an item response theory model; the mean is 0, and the units are standard deviations).

The obvious conclusion is that the evolution item isn’t measuring the same thing in relatively religious and relatively nonreligious subjects that the other items in the Pew science literacy battery are.

In Measurement Problem, I also used DIF to show that belief in climate change is a biased (and hence invalid) measure of climate science literacy.  That analysis, though, assessed responses to a “belief in climate change” item (one identical to Pew’s) in relation to scores on a general climate-science literacy assessment, the “Ordinary Climate Science Intelligence” (OCSI) assessment.  Pew’s scientist-AAAS study didn’t have a climate-science literacy battery.

Its general science literacy battery, however, did have one climate-science item, a question of theirs that I had in fact included in OCSI: “What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it Carbon dioxide, Hydrogen, Helium, or Radon?” (CO2).

Below are the DIF item profiles for CO2 and gw_c (regression models here). Regardless of their political outlooks, subjects become more likely to answer CO2 correctly as their science literacy score increases–that makes perfect sense!

But as their science literacy score increases, individuals of diverse political outlooks don’t converge on “belief in human-caused climate change”; they become more polarized.  That question is measuring who the subjects are, not what they know about climate science.
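
The corresponding regression sketch for the two climate-related items, with the political-orientation composite as the moderator (co2_correct is a hypothetical name for the scored CO2 item):

```python
import statsmodels.formula.api as smf

# CO2 knowledge item: expect a positive scilit slope and little or no interaction with politics.
dif_co2 = smf.logit("co2_correct ~ scilit * right_left", data=df).fit()

# "Belief in human-caused warming": expect the scilit slope to differ sharply by political
# outlook, i.e., a large scilit:right_left interaction (polarization, not convergence).
dif_gw = smf.logit("gw_c ~ scilit * right_left", data=df).fit()

print(dif_co2.summary())
print(dif_gw.summary())
```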

So there you go!

I probably will tinker a bit more with these data and will tell you if I find anything else of note.

But in the meantime, I recommend you do the same! The data are out there & free, thanks to Pew.  So reciprocate Pew’s contribution to knowledge by analyzing them & reporting what you find out!

References

Kahan, D. M. (2015). Climate-Science Communication and the Measurement Problem. Advances in Political Psychology, 36, 1-43.

Kahan, D. M. (2014). “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112.

Osterlind, S. J., & Everson, H. T. (2009). Differential Item Functioning. Thousand Oaks, CA: Sage.

Pew Research Center (2015). Public and Scientists’ Views on Science and Society.
