As the 12 billion readers of this blog (we are down 2 billion, apparently because we’ve been blocked in the Netherlands Antilles & Macao. . .) know, I have been working on & reporting various analyses involving an “ordinary science intelligence” (OSI) science-comprehension measure.
Indeed, one post describing how it relates to political outlooks triggered some really weird events—more than once in fact!
But in any case, I’ve now assembled a set of analyses and put them into one document, which you can download if you like here.
The document briefly describes the history of the scale, which for now I’m calling OSI_2.0 to signify that it is the successor of the science comprehension instrument (henceforward “OSI_1.0”) featured in “The polarizing impact of science literacy and numeracy on perceived climate change risks,” Nature Climate Change 2, 732-735 (2012).
Like OSI_1.0, _2.0 is a synthesis of existing science literacy and critical reasoning scales. But as explained in the technical notes, OSI_2.0 combines items that were drawn from a wider array of sources and selected on the basis of a more systematic assessment of their contribution to the scale’s performance.
The goal of OSI_2.0 is to assess the capacity of individuals to recognize and give proper effect to valid scientific evidence relevant to their “ordinary” or everyday decisions—whether as consumers or business owners, parents or citizens.
A measure of that sort of facility with science—rather than, say, the one a trained scientist or even a college or high school science student has—best fits the mission of OSI_2.0, which is to enable “empirical investigation of how individual differences in science comprehension contribute to variance in public perceptions of risk and like facts.”
Here are some of the things you, as a regular reader of this blog who has already been exposed to one or another feature of OSI_2.0, can learn from the document:
1. The items and their derivation. The current scale consists of 18 items drawn from the NSF Indicators, the Pew Science & Technology battery, the Lipkus/Peters Numeracy scale, and Frederick’s Cognitive Reflection Test. My next goal is to create a short-form version that performs comparably well; 10 items would be great & 8 even better. . . . But in any case, the current 18 and their sources are specifically identified.
2. The psychometric properties of the scale. The covariance structure, including dimensionality and reliability, is set forth, of course. But the cool thing here, in my view, is the grounding of the scale in Item Response Theory.
There are lots of valid ways to combine or aggregate individual items, conceived of as observable or manifest “indicators,” into a scale conceived of as measuring some unobserved or latent disposition or trait.
The distinctive thing about IRT is the emphasis it puts on assessing how each item contributes to the scale’s measurement precision along the range of the disposition, treated as a continuous variable. This is a nice property, in particular, when one is designing a knowledge or aptitude assessment instrument, where one would like to be confident not only that one is reliably relating variance in the disposition as a whole to some outcome variable of interest but also that one is reliably assessing individual differences in levels of the disposition within the range of interest (usually the entire range).
IRT is a great scale development tool because it helps to inform decisions not only about whether items are valid indicators but also about how much relative value each contributes.
One thing you can see with IRT is that, at least as measured by the OSI_2.0 scale, the “basic fact” items (“Electrons are smaller than atoms—true or false?”; “Does the Earth go around the Sun, or does the Sun go around the Earth?”) contribute mainly to measurement discrimination at low levels of “ordinary science intelligence.”
One gets credit for those, certainly, but not as much as for correctly responding to the sorts of quantitative and critical reasoning items that come from the Numeracy scale and the Cognitive Reflection Test.
That’s as it should be in my view: a person who has the capacity to recognize and make use of valid science will no doubt have used it to acquire knowledge of a variety of basic propositions relating to the physical and biological sciences; but what we care about—what we want to certify and measure—is her ability to enlarge that stock of knowledge and use it appropriately to advance her ends.
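To make the IRT logic concrete, here is a minimal sketch in Python. It is a toy two-parameter logistic (2PL) model with invented item parameters, not the actual OSI_2.0 estimation; the point is just to show how an easy “basic fact” item concentrates its measurement precision at the low end of the latent trait, while a harder CRT-style item concentrates it at the high end.

```python
import numpy as np

# Toy two-parameter logistic (2PL) IRT model:
#   P(theta) = 1 / (1 + exp(-a * (theta - b)))
# where a is item discrimination and b is item difficulty.
# The parameters below are invented for illustration; they are
# NOT the estimated OSI_2.0 item parameters.

def p_correct(theta, a, b):
    """Probability of a correct response at ability level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information the item supplies at theta:
    I(theta) = a**2 * P(theta) * (1 - P(theta))."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)  # latent "ordinary science intelligence"

# A hypothetical easy "basic fact" item (low difficulty) vs. a
# hypothetical hard CRT-style item (high difficulty).
easy = item_information(theta, a=1.5, b=-1.5)
hard = item_information(theta, a=1.5, b=1.5)

for t, e, h in zip(theta, easy, hard):
    print(f"theta={t:+.1f}   basic-fact info={e:.3f}   CRT-style info={h:.3f}")
```

Run it and you’ll see the easy item’s information peak near theta = -1.5 and fall to nearly nothing at the high end, with the hard item showing the mirror image: in cartoon form, the pattern just described.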
3. External validity. The technical notes report analyses showing that OSI_2.0 is, unsurprisingly, correlated with education and with open-mindedness (as measured by Baron’s Actively Open-minded Thinking scale) but doesn’t reduce to either of them; indeed, it predicts performance on tasks that demand or display a distinctive science-comprehension capacity (like covariance detection) more accurately than they do.
4. Other covariates. There are correlations with race and gender, but they are actually pretty small. None with political outlooks (but note: I didn’t even check for a correlation with belonging to the Tea Party—I’ve learned my lesson! Actually, I can probably be coaxed into checking & reporting this; what “identity with the Tea Party” measures is a pretty interesting question! But I’ll do it in a post published in the middle of the night & written in pig latin to be sure to avoid a repeat of the sad spectacle that occurred the last time.)
5. The science-comprehension invalidity of “belief in” questions relating to evolution and global warming. The notes illustrate the analytical/practical utility of OSI_2.0 by showing how the scale can be used to assess whether variance in response to standard survey items on evolution and global warming reflects differences in science comprehension. It doesn’t!
That, of course, is the conclusion of my new paper Climate Science Communication and the Measurement Problem, which uses OSI_2.0 to measure science comprehension.
But the data in the notes present a compact rehearsal of the findings discussed there, along with additional factor analyses. These reinforce the conclusion that “belief in” evolution and “belief in” global warming items are in fact indicators of latent “group identity” variables featuring religiosity and right-left political outlooks, respectively, and not indicators of the latent “ordinary science intelligence” capacity measured by the OSI_2.0 scale.
The analyses were informed by interesting feedback I received on a post on factor analysis and scale dimensionality—maybe the commentators on that one will benefit me with additional feedback!
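For readers who want to see the factor-analytic logic in code rather than in loading tables, here is a minimal simulation in Python (invented data and placeholder item names, not the study data): two uncorrelated latent traits each drive a set of indicators, and a “belief in evolution” item that is in fact driven by the identity trait loads with the identity indicators, not the science-comprehension ones, when two factors are extracted.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 5000

# Two uncorrelated latent traits (simulated for illustration only):
identity = rng.normal(size=n)  # religiosity / political-identity factor
osi = rng.normal(size=n)       # "ordinary science intelligence" factor

def indicator(latent, loading):
    """A continuous indicator: loading * latent + unique noise."""
    return loading * latent + rng.normal(scale=np.sqrt(1 - loading**2), size=n)

# Hypothetical indicators (placeholder names, not the actual study items):
X = np.column_stack([
    indicator(identity, 0.8),  # e.g., church attendance
    indicator(identity, 0.7),  # e.g., biblical literalism
    indicator(identity, 0.6),  # e.g., conservative self-identification
    indicator(osi, 0.8),       # e.g., Numeracy item
    indicator(osi, 0.7),       # e.g., CRT item
    indicator(osi, 0.6),       # e.g., basic-fact item
    indicator(identity, 0.7),  # "belief in evolution" item, driven by identity
])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))  # rows = items, columns = factors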
The published version of the OSI_2.0 working paper will appear in the Journal of Risk Research. Keep your eyes peeled for it at the newsstand—no doubt that issue will sell out right quick!