Weekend update: Pew’s disappointing use of invalid survey methods on GM food risk perceptions

So here’s a follow-up on my “grading” of Pew’s public attitudes toward science report, and why I awarded it a “C-” in promoting informed public discussion, notwithstanding its earning an “A” in scholarly content (the data considered separately from the Center’s commentary, particularly the press materials it issued).

This follow-up says a bit more about the unscholarly way Pew handled public opinion on GM food risks.

Some background points:

1. It’s really easy for people to form misimpressions about “public opinion.”

Why? Because, for one thing, what “people” (who usually can’t usefully be analyzed w/o being broken down into groups) “think” about anything is not something anyone can directly observe; like lots of other complicated processes, it has to be inferred from things we can observe, which are only correlates of, or proxies for, it.

For another, none of us is in a position, via our personal, casual observations, to collect a valid sample of those observable correlates or proxies.  We have very limited exposure, reflecting the partiality of our own social networks and experiences, to the ways in which “the public” reveals what it thinks.  And it is in fact a feature of human psychology to overgeneralize from imperfect samples like that & to make mistakes as a result.
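
To make that concrete, here’s a minimal simulation sketch (all numbers purely hypothetical, nothing estimated from real data) of how an unrepresentative personal “sample” can badly misestimate a population-wide proportion:

```python
import random

random.seed(1)

# Hypothetical population: 10% are genuinely concerned about some risk.
population = [1] * 10_000 + [0] * 90_000

# A proper random sample recovers the true rate (within sampling error).
srs = random.sample(population, 500)
print(f"random-sample estimate: {sum(srs) / len(srs):.0%}")  # ~10%

# A casual personal "sample" drawn mostly from a like-minded network,
# where concerned people are far likelier to be encountered, overshoots badly.
concerned = [p for p in population if p == 1]
unconcerned = [p for p in population if p == 0]
network = random.sample(concerned, 300) + random.sample(unconcerned, 200)
print(f"personal-network estimate: {sum(network) / len(network):.0%}")  # 60%
```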

2. One of the things many, many people are mistaken about as a result of these difficulties is “public opinion” on GM food risks.  The media is filled with accounts of how anxious people are about GM foods.  That’s just not so: people consume them like mad (70% to 80% of the food for sale in a US supermarket contains GMOs).

Social science researchers know this & have been engaged in really interesting investigations to explain why this is so, since clearly things could be otherwise: there are environmental risks that irrationally scare the shit out of members of the US public generally (e.g., nuclear waste disposal). Moreover, European public opinion is politically polarized on GM foods, much the way US opinion is on, say, climate change.  So why not here (Peters et al. 2007; Finucane & Holup 2005; Gaskell et al. 1999)? Fascinating puzzle!

That isn’t to say there isn’t controversy about GM foods in American society. There is: in some sectors of science; in politics, where efforts to regulate GM foods are advanced with persistence by interest groups (organic food companies, small farmers, entrepreneurial environmental groups) & opposed with massive investments by agribusiness; and in very specialized forms of public discourse, mainly on the internet.

Indeed, the misimpression that GM foods are a matter of general public concern exists mainly among people who inhabit these domains, & is fueled both by the tendency of those inside them to generalize inappropriately from their own limited experience and by the echo-chamber quality of these enclaves of thought.

3.  The point of empirical public opinion research is to correct the predictable mistakes that arise from dynamics like these.

One way empirical researchers have tried to do this in the case of GM foods is by showing that members of the public in fact have no idea what GM foods are.

They fail miserably if you measure their knowledge of GMOs.

They also say all kinds of silly things about GM foods that clearly aren’t true: e.g., that they scrupulously avoid eating them and that they believe GM foods are already heavily regulated and subject to labeling requirements (e.g., Hallman et al. 2013).

That people are answering questions in a manner that doesn’t correspond to reality shows that the survey questions themselves are invalid. They are not measuring what people in the world think, b/c people in the world (i.e., the United States) aren’t thinking anything at all about GM foods; they are just eating them.

The only thing the questions are measuring, the only thing they are modeling, is how people react to being asked questions they don’t understand.
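
To see why such responses can’t be taken at face value, consider what happens when respondents who hold no real attitude are pressed to pick an answer anyway: the forced choices still tabulate into tidy-looking percentages. A minimal simulation sketch (the 10% opinion-holder share and the 70/30 tilt are assumptions for illustration, not estimates of anything):

```python
import random

random.seed(7)

def respond(has_real_opinion: bool) -> str:
    """A respondent with no real opinion still picks an answer when pressed."""
    if has_real_opinion:
        return "safe"  # stipulate that the genuine opinion-holders say "safe"
    # No-opinion respondents react to the question, not the topic; assume
    # (hypothetically) a scary-sounding prompt tilts their coin toward "unsafe".
    return "unsafe" if random.random() < 0.7 else "safe"

# 10% of the sample holds a real opinion; 90% is answering blind.
sample = [respond(has_real_opinion=(i < 100)) for i in range(1000)]
print(f"'generally unsafe': {sample.count('unsafe') / len(sample):.0%}")  # ~63%
```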

This was a major theme, in fact, of the National Academy of Sciences’ recent conference on science communication & GMOs.  So was the need to try to get this information across to the public, to correct the pervasive misimpression that GM foods are in fact a source of public division in the U.S.

So what did Pew do?  It issued survey items that serious social science researchers know are invalid and promoted the results in exactly the way that fosters the misimpression those researchers are trying to correct!

Pew asked members of their general public sample, “Do you think it is generally safe or unsafe to eat genetically modified foods?”

Thirty-seven percent answered “generally safe,” 57% “generally UNsafe” and 6% “don’t know/Refused.”

Eighty-eight percent of the “scientist” (AAAS member) sample, in contrast, answered “generally safe.”

Pew trumpeted this 51-percentage-point difference, making it the major attention-grabber in its media promotional materials and Report commentary.
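
For the record, the “51%” is a difference in percentage points (88 − 37 = 51), not a ratio, and even simple reporting choices, like whether the 6% “don’t know/refused” stay in the base, move the numbers around. A quick sketch using Pew’s published figures:

```python
public = {"safe": 37, "unsafe": 57, "dk_refused": 6}  # Pew, general public (%)
scientists_safe = 88                                  # Pew, AAAS sample (%)

# The headline "gap": a simple difference in percentage points.
gap = scientists_safe - public["safe"]
print(f"gap: {gap} percentage points")                # 51

# Same data, different base: "safe" among respondents who gave an opinion.
safe_of_opinion_holders = 100 * public["safe"] / (public["safe"] + public["unsafe"])
print(f"'safe' among opinion-holders: {safe_of_opinion_holders:.1f}%")  # ~39.4
```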

This is really not good at all.

As an elite scholarly research operation, Pew knows that this survey item did not measure any sort of opinion that exists in the U.S. public. Pew researchers know that members of the public don’t know anything about GM foods.  They know that the public’s behavior in purchasing and consuming tons of food containing GM ingredients proves there is no meaningful level of concern about the risks of GM foods!

Indeed, Pew had to know that the responses to their own survey reflected simple confusion on the part of their survey respondents.

Pew couldn’t possibly have failed to recognize that, because (as eagle-eyed blog reader @MW pointed out) another question Pew posed to the respondents was, “When you are food shopping, how often, if ever, do you LOOK TO SEE if the products are genetically modified?”

Fifty percent answered “always” or “sometimes.”

This is patently ridiculous, of course, since there is nothing to see on the labels of foods in US grocery stores that indicates whether they contain GMOs.

This is the sort of question—like the ones that show that the US public believes that there already is GM food labeling in the US, and is generally satisfied with “existing” information on them (Hallman et al. 2013)—that researchers use to show that survey items on GM food risks are not valid: these items are eliciting confusion from people who have no idea what they are being asked.

And here’s another thing: immediately before asking these two questions, Pew used an introductory prompt that stated “Scientists can change the genes in some food crops and farm animals to make them grow faster or bigger and be more resistant to bugs, weeds, and disease.”

That’s a statement it is quite reasonable to imagine will generate a sense of fear or anxiety in survey takers.  So no surprise that if one then asks them, “Oh, are you worried about this?” and “Do you (wisely, of course) check to see whether this weird, scary thing has been done to your food?!,” people answer “Oh, yes!”

Even more disturbing, the question immediately before that asked whether people are worried about pesticides, a topic that will predictably raise respondents’ risk apprehension generally and bias upward their perceptions of other putative risk sources in subsequent questions (e.g., Han, Lerner & Keltner 2007).
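
Carryover effects of this sort are exactly what split-ballot designs are meant to detect: randomize which half of the sample gets the pesticide item first, then compare. A bare-bones simulation sketch (the 10-point priming effect is an invented number, used only to show the logic):

```python
import random

random.seed(42)

def simulate_response(primed: bool) -> int:
    """1 = answers 'generally unsafe'. Assume (hypothetically) that seeing
    the pesticide item first raises the odds of 'unsafe' by 10 points."""
    p_unsafe = 0.55 + (0.10 if primed else 0.0)
    return 1 if random.random() < p_unsafe else 0

# Split ballot: half the sample gets the pesticide item first (primed),
# half gets the GM item first (control).
primed = [simulate_response(True) for _ in range(1000)]
control = [simulate_response(False) for _ in range(1000)]

diff = sum(primed) / len(primed) - sum(control) / len(control)
print(f"order effect: {diff:+.1%} more 'unsafe' when pesticide item comes first")
```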

Sigh.

Bad pollsters use invalid questions on matters of public policy all the time.

They ask members of the American public whether they “support” or “oppose” this or that policy or law that it is clear most Americans have never heard of.  They then report the responses in a manner that implies that the public actually has a view on these things.

Half the respondents in a general population survey won’t know, or even have good enough luck to guess, the answer to the multiple-choice question “How long is the term of a U.S. Senator?” Only 1/3 of them can name their congressional Representative, and only 1/4 can name both of their Senators.

Are we really supposed to take seriously, then, a poll that tells us 55% of them have an opinion on the “NSA’s telephonic metadata collection policy”?!

Good social science researchers are highly critical of this sort of sensationalist misrepresentation of what is really going on in public discourse (Krosnick, Malhotra & Mittal 2014; Bishop 2005; Shuman 1998).

Pew has been appropriately critical of the use of invalid survey items in the past, too, particularly when the practice is resorted to by policy advocates, who routinely construct survey items to create the impression that there is “majority support” on issues people have never heard of (Kohut 2010).

So why, then, would Pew engage in what certainly looks like exactly this sort of practice here?

Some very sensible correspondents on Twitter (a dreadful forum for meaningful conversation) wondered whether an item like Pew’s, while admittedly invalid as a measure of what members of the public are actually thinking now, might be a good sort of “simulation” of how they might respond if they learned more.

That’s a reasonable question, for sure.

But I think the answer is no.

If a major segment of the US public were to become aware of GM foods (what they are, what the evidence is on their risks and benefits), the conditions in which they did so would be rich with informational cues and influences (e.g., the identity of the messengers, what their peers are saying, etc.) of the sort that we know have a huge impact on the formation of risk perceptions.

It’s just silly to think that the experience of getting a telephone call from a faceless pollster asking strange questions about matters one has never considered before can be treated as giving us insight into the reactions such conditions would be likely to produce.

We could try to experimentally simulate what those conditions might be like; indeed, we could try to simulate alternative versions of them, and try to anticipate what effect they might have on opinion formation.

But the idea that the experience of a respondent in a simple opinion survey like Pew’s is a valid model of that process is absurd.  Indeed, that’s one of the things that experimental simulations of how people react to new technologies have shown us.
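
That is, a credible simulation would look less like a one-shot poll item and more like a between-subjects experiment that deliberately varies the informational cues. A bare-bones sketch of such a design (condition labels, baseline, and effect sizes are all hypothetical):

```python
import random

random.seed(3)

# Hypothetical cue manipulations of the kind real-world exposure carries.
MESSENGER_EFFECT = {"university scientist": -0.15, "advocacy group": +0.20}
PEER_EFFECT = {"peers accepting": -0.10, "peers divided": 0.00}

def simulate_subject(messenger: str, peers: str) -> int:
    """1 = perceives GM foods as risky. Baseline and effects are invented;
    the point is that cues, not the bare question, drive the response."""
    p_risky = 0.40 + MESSENGER_EFFECT[messenger] + PEER_EFFECT[peers]
    return 1 if random.random() < p_risky else 0

# Simulate 500 subjects per cell of the 2 x 2 design, then compare cells.
for messenger in MESSENGER_EFFECT:
    for peers in PEER_EFFECT:
        cell = [simulate_subject(messenger, peers) for _ in range(500)]
        print(f"{messenger} / {peers}: {sum(cell) / len(cell):.0%} 'risky'")
```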

It’s also what real-world experience teaches: just ask the interest groups that sponsored GM-labeling referenda that went down to defeat in states they targeted after polls showed 80% support for labeling.

But in any case, if that’s what Pew thought it was doing (simulating how people would think about GM food risks if they were to start thinking about them), it should have said so.  Then readers of its report would not have formed a misimpression about what the question was measuring.

Instead, Pew said only that it had done a survey that documents a “gap” between what members of the public think about GM food risks and what scientists think.

Their survey items on GM food risks do no such thing.

And that they would claim otherwise, and reinforce rather than correct public misimpressions, is hugely disappointing.

Refs

Bishop, G.F. The illusion of public opinion: fact and artifact in American public opinion polls (Rowman & Littlefield, Lanham, MD, 2005).

Finucane, M.L. & Holup, J.L. Psychosocial and cultural factors affecting the perceived risk of genetically modified food: an overview of the literature. Soc Sci Med 60, 1603-1612 (2005).

Gaskell, G., Bauer, M.W., Durant, J. & Allum, N.C. Worlds apart? The reception of genetically modified foods in Europe and the US. Science 285, 384-387 (1999).

Hallman, W., Cuite, C. & Morin, X. Public Perceptions of Labeling Genetically Modified Foods. Rutgers School of Environ. Sci. Working Paper 2013-2001, available at http://humeco.rutgers.edu/documents_PDF/news/GMlabelingperceptions.pdf.

Han, S., Lerner, J.S. & Keltner, D. Feelings and Consumer Decision Making: The Appraisal-Tendency Framework. J Consum Psychol 17, 158-168 (2007).

Kohut, A. Views on climate change: What the polls show. N.Y. Times A22 (June 13, 2010), available at http://www.nytimes.com/2010/06/14/opinion/l14climate.html?_r=0

Krosnick, J.A., Malhotra, N. & Mittal, U. Public Misunderstanding of Political Facts: How Question Wording Affected Estimates of Partisan Differences in Birtherism. Public Opin Quart 78, 147-165 (2014).

Lerner, J.S., Han, S. & Keltner, D. Feelings and Consumer Decision Making: Extending the Appraisal-Tendency Framework. J Consum Psychol 17, 181-187 (2007).

Peters, H.P., Lang, J.T., Sawicka, M. & Hallman, W.K. Culture and technological innovation: Impact of institutional trust and appreciation of nature on attitudes towards food biotechnology in the USA and Germany. Int J Public Opin R 19, 191-220 (2007).

Shuman, H. Interpreting the Poll Results Better. Public Perspective 1, 87-88 (1998).
