Who fears what & why? Trust but verify!

It seems Patrick & a friend were involved in a discussion about whether those who are (aren't) concerned about climate change are the "same" people who are (aren't) concerned about nuclear power and GM food risks.

A discussion/argument like that is pretty interesting, if you think about it.

We all know that risk perceptions tend to come in intriguing packages — intriguing b/c the correlations between the factual understandings they comprise are more plausibly explained by the common cultural meanings they express than by any empirical premises they share.

E.g., imagine you were to say to me, "Gee, I wonder whether crime rates can be expected to go up or instead to go down if one of the 40 or so states that now automatically issue a permit to carry a concealed handgun to any adult w/o a criminal record or a history of mental illness enacted a ban on venturing out of the house with a loaded pistol tucked unobtrusively in one's coat pocket?"

If I answered, “Well, I’m not sure, but I do have some valid evidence that human activity has caused the temperature of the earth to increase in recent decades–surely you can deduce the answer from that,” you’d think either I was being facetious or I was an idiot (maybe both; they can occur together–I don’t know whether they are correlated).

But if I were to run up to you all excited & say, “hey, look–I found a correlation between believing that the temperature of the earth has not increased as a result of human causes in recent decades and believing that banning concealed handguns would cause crime to increase,” you’d probably say, “So? Only a truly clueless dolt wouldn’t have expected that.”

You'd say that (& be right, as the inset graphic, which correlates responses to the "industrial strength risk perception measure" as applied to "private ownership of guns" and "global warming," illustrates) b/c "everyone knows" (they can just see) that our society is densely populated with "types of people" who form packages of related empirical beliefs, in which positions on the reality & consequences of human-caused climate change are inversely correlated with beliefs about the dangers posed by private ownership of handguns in the U.S.

The "types" are ones who share certain kinds of commitments relating to how society and other types of collective enterprises should be organized. We can all see that our social world is like that, but because we can't directly observe people's "types" (they & the dispositions they impart are "latent variables"), we come up with observable indicators, like "cultural worldviews" &/or "political ideologies" & various demographic characteristics, that we can combine into valid scales or classifying instruments of one sort or another. We can use those to satisfy our curiosity about the nature of the types & the dynamics that generate the puzzling pattern of empirical beliefs they form on certain disputed risk issues.
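To make the "indicators combined into scales" move concrete, here is a minimal sketch in Python, with made-up items and simulated data rather than the actual worldview batteries, of how noisy indicator items get standardized, averaged into a scale, and checked for internal consistency:

```python
# Minimal sketch: combine hypothetical "worldview" items into a scale.
# All item names and data here are invented for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
latent = rng.normal(size=400)                      # the unobserved disposition (the "type")
items = pd.DataFrame({                             # noisy 1-6 agree/disagree indicators of it
    f"worldview_item_{i}": np.clip(np.round(3.5 + 1.2 * latent + rng.normal(0, 1, 400)), 1, 6)
    for i in range(1, 5)
})

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Classic internal-consistency estimate for a set of scale items."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

z = (items - items.mean()) / items.std(ddof=1)     # standardize each item
scale_score = z.mean(axis=1)                       # unweighted average = the "worldview" scale
print(f"alpha = {cronbach_alpha(items):.2f}")      # respectable, since the items share a latent factor
print(scale_score.describe().round(2))
```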

We can all readily think of indicators of the sorts of "types" whose perceptions of the risks of climate change & guns are likely to be highly convergent, e.g.:

Those risks are “politicized” in right-left terms, so we could use “right-left” political outlooks to specify the “types” & do a pretty decent job (a walk or bunt single; hey, it’s spring training!).

We could do even better (stand-up double) if we used the cultural cognition "worldview" scales. And if we tossed in race & gender as additional indicators (say, by including appropriate cross-product interaction variables in a regression model), we'd be hitting a home run!
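And just so the "cross-product interaction" bit isn't hand-waving: here's a rough sketch, with simulated data and invented variable names (not the actual CCP model specification or measures), of what letting a worldview effect vary by race & gender looks like in a regression:

```python
# Hedged sketch: interaction ("cross-product") terms in an OLS model.
# Variable names and data are hypothetical, chosen only for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    "hierarchy": rng.normal(size=n),      # hypothetical standardized worldview score
    "white": rng.integers(0, 2, n),       # hypothetical indicator: 1 = white
    "male": rng.integers(0, 2, n),        # hypothetical indicator: 1 = male
})
# Simulate a "white male effect": the worldview's dampening of gun-risk
# perception is strongest for white males (by construction here).
df["gun_risk"] = (3.5 - 0.6 * df["hierarchy"]
                  - 0.8 * df["hierarchy"] * df["white"] * df["male"]
                  + rng.normal(0, 1, n))

# The cross-product terms are what let the worldview effect vary by race & gender.
model = smf.ols("gun_risk ~ hierarchy * white * male", data=df).fit()
print(model.params.round(2))
```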

But here's another interesting thing about Patrick's query (and the argument I'm guessing was the motivation for his posing it): our perceptions of the packages and the types aren't always shared, and even when widely held, aren't always right.

Not that surprising, actually, when you remember that the types can't be directly observed. It helps, too, to realize that our apprehension of these matters (the packages, the types) rests on a form of sampling rife with potential biases. The "data," as it were, that inform our perceptions are always skewed by the partiality of our social interactions, which reflect our propensity to engage with those who share our outlooks and interests.

That sort of "selection bias" is a perfectly normal thing; only a lunatic would try to "stratify" his or her social interactions to assure "representativeness" in his or her personal observations of how risk perceptions are distributed across types of persons (I suppose one could try applying population weights to all of one's interactions, although that would be time-consuming & a nuisance).

But it does mean that we'll inevitably disagree with our associates now & again (and even when we don't disagree, we may all be wrong) about who fears what & why.

E.g., many people think that concern over childhood immunizations is part of one or another risk-perception package held by one or another recognizable “type” of person.

Some picture it as part of the package characteristic of the global-warming-concerned, nuclear-power-fearing tribe of "egalitarians, [who] oppose . . . big corporations and their products."

When others grope at this particular elephant, they report feeling "the conservative don't-tread-on-me crowd that distrusts all government recommendations", i.e., the same "type" that is skeptical of climate-change and nuclear-power risks.

Well, one or the other could have been right, but it turns out that they are both just plain wrong.

As the CCP report on Vaccine Risk Perceptions and Ad Hoc Risk Communication documents, all the recognizable “types”—whether defined in political or cultural terms—support universal childhood immunization.

The perception that vaccines cause autism is not part of the same risk-perception package as global warming: climate-change skeptics and climate-change believers both overwhelmingly perceive the risks of childhood immunizations to be low and the benefits of them to be high.

The misunderstandings about who is afraid of vaccines and why reflect selection bias in an echo chamber, reinforced by the reciprocal recriminations and expressions of contempt that pervade climate change discourse and that fill members of each side with the motivation to see those on the other as harboring all sorts of noxious beliefs and as being the source of myriad social ills. (Is this a new thing? Nope.)

So … back to Patrick’s question!

It's not news (it's a staple of the study of public risk perceptions, and of the cultural theory of risk in particular) that perceptions of climate-change and nuclear-power risks are part of a common "package" and are associated with distinctive types.

So my guess is that either Patrick or his friend (the one he was having an argument with; nothing inherently unfriendly about disagreeing!) was taking the position that GM-food risk perceptions were part of the same package as the climate & nuclear ones.

Actually, the view that GM foods are "politically polarizing" is a common one. "Unreasoning, anti-science" stances toward GM foods, according to this view, are for "liberals" what "unreasoning, anti-science" stances toward climate are for "conservatives."

But this is the toxic echo chamber once again.

As the 17.5 billion regular followers of this blog know (welcome, btw, to new readers!), GM foods get a big collective “enh,” at least in the view of the general public.  Most people have never really heard of GM foods, and happily consume humungous helpings of them at every meal.

Advocacy groups of a leftish orientation have been trying to generate concern—trying, moreover, by resort to exactly the “us-vs-them” incitement that is poisoning our science communication environment—but remarkably have been getting absolutely nowhere.

Here in the U.S., that is; matters are different in Europe. Why there but not here?! These things are truly mysterious, and if you don't see that, you get a failing grade on the basic curiosity & imagination aptitude test.

Here are some data to illustrate that point and to answer Patrick’s question.

First, look at “packages”:

Here gun-possession, nuclear, GM-foods, and childhood-vaccine risk perceptions are plotted in relation to climate change risk perceptions (the plotted lines reflect locally weighted regression; they are "truer" to the raw data than a linear regression line, the kind implied by the correlation coefficient I've also reported for each, would be).
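For the curious, here's a bare-bones sketch of what that plotting choice amounts to, using simulated 0-7 responses rather than the CCP data (the weak relationship is built in by assumption, just to mimic the pattern described below):

```python
# Minimal sketch: lowess (locally weighted) fit vs. a Pearson r summary.
# The data are simulated for illustration; they are not the survey data.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
climate = np.clip(rng.normal(4, 2, 600), 0, 7)                          # 0-7 "industrial strength" item
gm_food = np.clip(2.5 + 0.2 * climate + rng.normal(0, 1.5, 600), 0, 7)  # weakly related, by assumption

r, _ = pearsonr(climate, gm_food)               # the linear summary reported in the text
smoothed = lowess(gm_food, climate, frac=0.6)   # sorted (x, fitted-y) pairs for the plotted line

plt.scatter(climate, gm_food, s=8, alpha=0.3)
plt.plot(smoothed[:, 0], smoothed[:, 1], linewidth=2)
plt.xlabel("climate change risk (0-7)")
plt.ylabel("GM food risk (0-7)")
plt.title(f"locally weighted (lowess) fit; Pearson r = {r:.2f}")
plt.show()
```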

Yes, GM food risk perceptions are correlated with global warming ones. But the effect is very modest. It's nothing like the correlation between guns and climate change or between nuclear power and climate change. You'll find plenty of people (ones without two heads and who don't think contrails are a government plot) who think climate change is a joke but GM foods a serious threat, and vice versa.

It’s really not part of the “climate change risk perception family.”

How about in terms of “type”?

Enlarging a bit on some data that I’ve reported before, here are various risk perceptions plotted in relation to conventional left-right political views (measured with a composite scale that combines responses to party-identification and liberal-conservative ideology items):

Pretty clear, I think, that GM foods is just not a left-right issue.
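For anyone who wants to tinker, here's the same scale-building move from the earlier sketch applied to two hypothetical political items (party-identification & liberal-conservative ideology), plus a quick check of a simulated GM-food risk item across the resulting scale; the flat pattern is built in by construction, merely to mirror the result just described, not to reproduce the CCP data:

```python
# Hedged sketch: a composite right-left scale from two hypothetical items,
# then group means of a simulated (and deliberately unrelated) risk item.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "party_id": rng.integers(1, 8, n),   # 1 = strong Democrat ... 7 = strong Republican (hypothetical)
    "ideology": rng.integers(1, 6, n),   # 1 = very liberal ... 5 = very conservative (hypothetical)
})
z = (df - df.mean()) / df.std(ddof=1)    # put both items on a common z-score metric
df["conserv_repub"] = z.mean(axis=1)     # composite right-left score (higher = more conservative)

# Simulated GM-food risk item, generated independently of the political scale.
df["gm_food_risk"] = np.clip(rng.normal(3, 1.5, n), 0, 7)

terciles = pd.qcut(df["conserv_repub"], 3, labels=["left", "middle", "right"])
print(df.groupby(terciles, observed=True)["gm_food_risk"].mean().round(2))  # roughly flat
```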

As regular readers know, I’ve also examined GM food risks in relation to other types of “type” indicators, including the cultural cognition worldview scales and “interpretive community” scales derived from environmental risk perceptions.  It just doesn’t connect in a practically meaningful way.

So what to say?

Well, for one thing, there’s certainly no reason for embarrassment in finding out that things aren’t exactly as one conjectured on these matters.

As I said,  “risk packages”—because they reflect unobservable or “latent” dispositions, and because we are constrained to rely on partial and skewed impressions when we observe them—definitely have fuzzy peripheries.

In addition, the packages breed dynamics of misinformation, including the echo chamber effect and strategic behavior by deliberate science-communication environment polluters.

Under these circumstances, we should all adopt a stance of conscious provisionality toward our impressions here. We shouldn't "disbelieve" what our senses tell us, but we should expect evidence now & again that we have misperceived, and indeed we should seek out such evidence before making decisions of consequence that turn on whether our perceptions are correct.

As a famous scholar of risk perception (I can't remember his name; early sign of senility? Nah, couldn't be!) said, in some other language but in rough translation, "trust but verify!"

Maybe it’s just me, but I actually love it when evidence bites me in the ass on something like this.

Not just because I want to be sure the beliefs I hold are free of error, although of course I do feel that way.

But because every time the evidence surprises me I experience anew the sense of wonder at this phenomenon.

What is going on here?!  Why are there packages? Who are the “types”?

Why do some "risks" but not others become entangled in conflicts between diverse groups, all of which are amply stocked with individuals who are high in science comprehension and all of which have intact practices for transmitting what's collectively known to their members?

I really want to know the answers—and I know that I still just don’t!

“Tomorrow,” in fact, I’ll show you something that  is definitely freaking me out!
