Measuring “Ordinary Science Intelligence” (Science of Science Communication Course, Session 2)

This semester I’m teaching a course entitled the Science of Science Communication. I’ve posted general information on the course and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the first such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general.

In Session 2 (i.e., our 2nd class meeting) we started the topic of “science literacy and public attitudes.” We (more or less) got through “science literacy”; “public attitudes” will be our focus in Session 3.

As I conceptualize it, this topic is in the nature of foundation laying. The aim of the course is to form an understanding of the dynamics of science communication distinctive of a variety of discrete domains. In every one of them, however, effective communication will presumably need to be informed by what people know about science, how they come to know it, and what value they attach to science’s distinctive way of knowing. So we start with those.

By way of synthesis of the readings and the “live course” (as opposed not to “dead” but to “online”) discussion of them, I will address these points: (1) measuring “ordinary science intelligence”—what & why; (2) “ordinary science intelligence” & civic competence; (3) “ordinary science intelligence” & evolution; and (4) “ordinary science intelligence” as an intrinsic good.

1. “Ordinary science intelligence” (OSI): what is being measured & why?

There are many strategies that could be, and are, used to measure what people know about science and whether their reasoning conforms to scientific modes of attaining knowledge. To my mind at least, “science literacy” seems to conjure up a picture of only one such strategy—more or less an inventory check against a stock of specified items of factual and conceptual information. To avoid permitting terminology to short circuit reflection about what the best measurement strategy is, I am going to talk instead of ways of measuring ordinary science intelligence (“OSI”), which I will use to signify a nonexpert competence in, and facility with, scientific knowledge.

I anticipate that a thoughtful person (like you; why else would you have read even this much of a post on a topic like this?) will find this formulation question-begging. “A ‘nonexpert competence in, and facility with, scientific knowledge’? What do you mean by that?”

Exactly. The question-begging nature of it is another thing I like about OSI. The picture that “science literacy” conjures up not only tends to crowd out consideration of alternative strategies of measurement; it also risks stifling reflection on what it is that we want to measure and why. If we just start off assuming that we are supposed to be taking an inventory, then it seems natural to focus on being sure we start with a complete list of essential facts and methods.  But if we do that without really having formed a clear understanding of what we are measuring and why, then we’ll have no confident basis for evaluating the quality of such a list—because in fact we’ll have no confident basis for believing that any list of essential items can validly measure what we are interested in.

If you are asking “what in the world do you mean by ordinary science intelligence?” then you are in fact putting first things first. Am I–are we–trying to figure out whether someone will engage scientific knowledge in a way that assures the decisions she makes about her personal welfare will be informed by the best available evidence? Or that she’ll be able competently to perform various professional tasks (designing computer software, practicing medicine or law, etc.)? Or maybe to perform civic ones—such as voting in democratic elections? If so, what sort of science intelligence does each of those things really require? What’s the evidence for believing that? And what sort of evidence can we use to be sure that the disposition being measured really is the one we think is necessary?

If those issues are not first resolved, then constructing and assessing measures of ordinary science intelligence will be aimless and unmotivated. They will also, in these circumstances, be vulnerable to entanglement in unspecified normative objectives that really ought to be made explicit, so that their merits and their relationship to science intelligence can be reflectively addressed.

2. Ordinary science intelligence and civic competence

Jon Miller has done the most outstanding work in this area, so we used his stated “what and why” to help shape our assessment of alternative measures of OSI. Miller’s interest is civic competence. The “number and importance of public policy issues involving science or technology,” he forecasts, “will increase, and increase markedly” in coming decades as society confronts the “biotechnology revolution,” the “transition from fossil-based energy systems to renewable energy sources,” and the “continuing deterioration of the Earth’s environment.” The “long-term health of democracy,” he maintains, thus depends on “the proportion of citizens who are sufficiently scientifically literate to participate in the resolution of” such issues.

We appraised two strategies for measuring OSI with regard to this objective. One was Miller’s “civic science literacy” measure. In the style of an inventory, Miller’s measure consists of two scales: the first consists largely of key fact items (“Antibiotics kill viruses as well as bacteria [true-false]”; “Does the Earth go around the Sun, or the Sun go around the Earth?”), and the second is aimed at recognition of signature scientific methods, such as controlled experimentation (he treats the two as separate dimensions, but they are strongly correlated: r = 0.86). Miller’s fact items form the core of the National Science Foundation’s “Science Indicators,” a measure of “science literacy” that is standard among scholars in this field. Based on rough-and-ready cutoffs, Miller estimates that only 12% of U.S. citizens qualify as fully “scientifically literate” and that 63% are “scientifically illiterate”; Europeans do even worse (5% and 73%, respectively).
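(For the quantitatively inclined: the kind of check behind that r = 0.86 is easy to illustrate. Below is a minimal sketch in Python, with simulated, made-up responses rather than Miller’s actual data, of scoring a “facts” block and a “methods” block and correlating the two subscale totals. A strong correlation is what one would expect if a single latent disposition drives answers to both blocks.)

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 0/1 response matrices: rows are respondents, columns
# are items. One block stands in for the factual-knowledge items,
# the other for the methods items; a shared latent trait drives both.
n = 500
ability = rng.normal(size=n)
facts = (ability[:, None] + rng.normal(size=(n, 10))) > 0
methods = (ability[:, None] + rng.normal(size=(n, 5))) > 0

# Score each subscale by summing correct answers, then correlate.
r = np.corrcoef(facts.sum(axis=1), methods.sum(axis=1))[0, 1]
print(f"facts-methods subscale correlation: r = {r:.2f}")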

The second strategy for measuring OSI evaluates what might be called “scientific habits of mind.” The reason to call it that is that it draws inspiration from John Dewey, who famously opposed a style of science education that consists in the “accumulation of ready-made material,” in the form of canonical facts and standard “physical manipulations.” In its place, he proposed a conception of science education that imparts “a mode of intelligent practice, an habitual disposition of mind” that conforms to science’s distinctive understanding of the “ways by which anything is entitled to be called knowledge.”

There is no standard test (as far as I know!) for measuring this disposition. But there are various “reflective reasoning” measures–the “Cognitive Reflection Test” (Frederick), “Numeracy” (Lipkus & Peters), “Actively Open-Minded Thinking” (Baron; Stanovich & West), “Lawson’s Classroom Test of Scientific Reasoning”–that are understood to assess how readily people credit, and how reliably they make active use of, the styles of empirical observation, measurement, and inference (deductive and inductive) that are viewed as scientifically valid.

The measures used for “science literacy” and “scientific habits of mind” strike me as obviously useful for many things. But it’s not obvious to me that either of them is especially suited for assessing civic competence.

Miller’s superb work is focused on internally validating the “civic scientific literacy” measures, not externally validating them. Neither he nor others (as far as I know; anyone who knows otherwise, please speak up!) has collected any data to determine whether his “cutoffs” for classifying people as “literate” or “illiterate” predict how well or poorly they’ll function in any tasks that relate to democratic citizenship, much less whether they do so better than more familiar benchmarks of educational attainment (high-school diplomas and college degrees, standardized test scores, etc.). Here’s a nice project for someone to carry out, then.

The various “reflective reasoning” measures that one might view as candidates for Dewey’s “habit of mind” conception of OSI have all been thoroughly vetted, but only as predictors of educational aptitude and reasoning quality generally. They, too, have not been studied in any systematic way as markers of civic aptitude.

Indeed, there is at least one study that suggests that neither Miller’s “civic science literacy” measures nor the ones associated with the “scientific habits of mind” conception of OSI predict quality of civic engagement with what is arguably the most important science-informed policy issue now confronting our democracy: climate change. Performed by CCP, the study in question examined science comprehension and climate-change risk perceptions. It found that public conflict over the risks posed by climate change does not abate as science literacy, measured with the “NSF science indicator” items at the core of Miller’s “civic science literacy” index, and reflective reasoning skill, as measured with numeracy, increase. On the contrary, such controversy intensifies: cultural polarization among those with the highest OSI measured in this way is significantly greater than polarization among those with the lowest OSI.

We also discussed one more conception of OSI: call it the “science recognition faculty.” If they want to live good lives—or even just live—people, including scientists, must accept as known by science many more things than they can possibly comprehend in a meaningful way. Their well-being thus depends on their capacity to recognize what is known to science independently of being able to verify that, or understand how, science knows what it does. “Science recognition faculty” refers to that capacity.

There are no measures of it, as far as I know. It would be fun to develop some.

But my guess is that it’s unlikely any generalized deficiency in citizens’ science recognition faculty explains political conflicts over climate change, or other policy issues that turn on science, either. The reason is that most people most of the time recognize without difficulty what is known to science on billions & billions of things of consequence to their lives (e.g., “who knows how to make me better if I’m ill?”; “will flying on an airplane get me where I want to go? How about following a GPS?”; “should parents be required to get their children vaccinated against polio?”).

There is, then, something peculiar about the class of conflicts over policy-relevant science that interferes with people’s science recognition faculty. We should figure out what that thing is & protect ourselves—protect our science communication environment—from it.

Or at least that is how it appears to me now, based on my assessment of the best available evidence.

3. Ordinary science intelligence and “belief” in evolution

Perhaps one thinks that what should be measured is a disposition to assent to the best scientific understanding of evolution—i.e., the modern synthesis, which consists in the mechanisms of genetic variance, random mutation, and natural selection. If so, then none of the measures of OSI seems to be getting at the right thing either.

The NSF’s “science indicators” battery includes the question “Human beings, as we know them today, developed from earlier species of animals (true or false).” Typically, around 50% select the correct answer (“true,” for those of you playing along at home).

In 2010, a huge controversy erupted when the NSF decided to remove this question and another—“The universe began with a huge explosion”; only around 40% tend to answer this question correctly—from its science literacy scale.  The decision was derided as a “political” cave-in to the “religious right.”

But in fact, whether to include the “evolution” and “big bang” questions in the NSF scale depends on an important conceptual and normative judgment. One can design an OSI scale to be either an “essential knowledge” quiz or a valid and reliable measure of some unobservable disposition or aptitude. In the former case, all one cares about is including the right questions and determining how many a respondent answered correctly. But in the latter case, correct responses must be highly correlated across the various items; items whose responses don’t cohere with the others necessarily aren’t measuring the same thing. If one wants to test hypotheses about how OSI affects individuals’ decisions—whether as citizens, consumers, parents, or what have you—then a scale that is merely a quiz and not a valid and reliable latent-variable measure will be of no use: if responses are only randomly correlated, then necessarily the aggregate “score” will be only randomly connected to anything else respondents do or say. It is to avoid this result that scholars like Jon Miller have (very appropriately, and with tremendous skill) focused attention on the psychometric properties of the scales formed by varying combinations of science-knowledge items.
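(Here, concretely, is a minimal sketch in Python of the two standard psychometric checks implied above. The function names are mine and the 0/1 item scoring is an assumption; this is an illustration, not anyone’s actual analysis code. Cronbach’s alpha gauges whether the items cohere as a single scale, and each item’s correlation with the rest of the scale flags items that aren’t measuring the same thing.)

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) array of 0/1 scores.
    # Standard formula: (k/(k-1)) * (1 - sum(item variances)/variance(total)).
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_rest_correlations(items):
    # Correlation of each item with the sum of the remaining items;
    # near-zero values flag items that don't cohere with the scale.
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])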

Well, if one is trying to form a valid and reliable measure of OSI, the “evolution” and “big bang” questions just don’t belong in the NSF scale. The NSF keeps track of how the top tier of test-takers—those who score in the top 25% overall—have done on each question. Those top-scoring test-takers have answered correctly 97% of the time when responding to “All radioactivity is man-made (true-false)”; 92% of the time when assessing whether “Electrons are smaller than atoms (true-false)”; 90% of the time when assessing whether “Lasers work by focusing sound waves (true-false)”; and 98% of the time when assessing whether “The center of the Earth is very hot (true-false).” But on “evolution” and “big bang,” those same respondents have selected the correct response only 55% and 62% of the time.
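(The NSF-style check is just as easy to reproduce given an item-response matrix. A minimal sketch, again in Python with 0/1 scoring assumed and the function name my own invention: compute each item’s percent correct among respondents whose total score lands in the top quartile. An item like “The center of the Earth is very hot” should come back near 100% for that group; an item on which even top scorers split, the way “evolution” and “big bang” do, comes back far lower.)

import numpy as np

def top_quartile_pct_correct(items):
    # items: (n_respondents, n_items) array of 0/1 scores.
    # Returns each item's proportion correct among respondents
    # whose total score falls in the top 25%.
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    top = items[total >= np.quantile(total, 0.75)]
    return top.mean(axis=0)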

That discrepancy is strong evidence that the latter two questions simply aren’t measuring the same thing as the others. Indeed, scholars who have used the appropriate psychometric tools have concluded that “evolution” and “big bang” are measuring respondents’ religiosity. Moreover, insofar as the respondents who tend to answer the remaining items correctly a very high percentage of the time are highly divided on “evolution” and “big bang,” it can be inferred that OSI, as measured by the remaining items in the NSF scale, just doesn’t predict a disposition to accept the standard scientific accounts of the formation of the universe and the history of life on Earth.

The same is true, apparently, for valid measures of the “habit of mind” conception of OSI. In general, there is no correlation between “believing” in the best scientific account of evolution and understanding it at even a very basic level. That is, those who say they “believe” in evolution are no more likely than those who say they believe in divine “creation” to know what genetic variance, random mutation, and natural selection mean and how they work within the modern synthesis framework. How well one scores on a “scientific habit of mind” OSI scale—one that measures one’s disposition to form logical and valid inferences on the basis of observation and measurement—does predict both one’s understanding of the modern synthesis and one’s aptitude for learning it when it is presented in a science course. But even when they use their highly developed “scientific habits of mind” disposition to gain a correct comprehension of evolution, individuals who commence such a course “believing” in divine creation don’t “change their minds” or abandon their belief.

It is commonplace to cite the relatively high percentage of Americans who say they believe in divine creation as evidence of “low” science literacy or poor science education in the U.S. But ironically, this criticism reflects a poor scientific understanding of the relationship between various measures of science comprehension and beliefs in evolution.

4. Ordinary science intelligence as an intrinsic good

Does all this mean OSI—or at least the “science literacy” and “habits of mind” strategies for measuring it—is unimportant? It could only conceivably mean that if one thought that the sole point of promoting OSI was to make citizens form a particular view on issues like climate change or to make them assent to, and not merely comprehend, scientific propositions that offend their religious convictions.

To me, it is inconceivable that the value of promoting the capacity to comprehend and participate in scientific knowledge and thought depends on the contribution doing so makes to those goals. It is far from inconceivable that enhancing the public’s OSI (as defensibly defined and appropriately measured) would improve individual and collective decisionmaking. But I don’t accept that OSI must attain that or any other goal to be worthy of being promoted. It is intrinsically valuable. Its propagation in citizens of a liberal society is self-justifying.

This is the position, I think, that actually motivated Dewey to articulate his “habits of mind” conception of OSI.  True, he dramatically asserted that the “future of our civilization depends upon the widening spread and deepening hold of the scientific habit of mind,” a claim that could (particularly in light of Dewey’s admitted attention to the role of liberal education in democracy) reasonably be taken as evidence that he believed this disposition to be instrumental to civic competence.

But there’s a better reading, I think. “Scientific method,” Dewey wrote, “is not just a method which it has been found profitable to pursue in this or that abstruse subject for purely technical reasons.”

It represents the only method of thinking that has proved fruitful in any subject—that is what we mean when we call it scientific. It is not a peculiar development of thinking for highly specialized ends; it is thinking so far as thought has become conscious of its proper ends and of the equipment indispensable for success in their pursuit.

The advent of science’s way of knowing marks the perfection of a human capacity of singular value. The habits of mind integral to science enable a person “[a]ctively to participate in the making of knowledge,” which Dewey identifies as “the highest prerogative of man and the only warrant of his freedom.”

What in Dewey’s view makes the propagation of scientific habits of mind essential to the “future of our civilization,” then, is that only a life informed by this disposition counts as one “governed by intelligence.” “Mankind,” he writes, “so far has been ruled by things and by words, not by thought, for till the last few moments of history, humanity has not been in possession of the conditions of secure and effective thinking.” “And if this consummation” of human rationality and freedom is to be “achieved, the transformation must occur through education, by bringing home to men’s habitual inclination and attitude the significance of genuine knowledge and the full import of the conditions requisite for its attainment.”

To believe that we must learn to measure the attainment of scientific habits of mind in order to perfect our ability to propagate them honors Dewey’s inspiring vision.  To insist that the value of what we would then be measuring depends on the contribution that cultivating scientific habits of mind would make to resolution of particular political disputes, or to the erasure of every last sentimental vestige of the ways of knowing that science has replaced, does not.

Reading list.
