Culturally polarized Australia: Cross-cultural cultural cognition, Part 3 (and a short diatribe about ugly regression outputs)

In a couple of previous posts (here & here), I have discussed the idea of “cross-cultural cultural cognition” (C4) in general and in connection with data collected in the U.K. in particular. In this one, I’ll give a glimpse of some cultural cognition data from Australia.

The data come from a survey of a large, diverse general population sample. It was administered by a team of social scientists led by Steven Hatfield-Dodds, a researcher at the Australian National University. I consulted with the Hatfield-Dodds team on adapting the cultural cognition measures for use with Australian survey respondents.

It was a pretty easy job! Although we experimented with versions of various items from the “long form” cultural cognition battery, and with a diverse set of items distinct from those, the best-performing set consisted of the two six-item sets that make up the “short form” versions of the CC scales. The items were reworded in a couple of minor ways to conform to Australian idioms.

Scale performance was pretty good. The items loaded appropriately on two distinct factors corresponding to “hierarchy-egalitarianism” and “individualism-communitarianism,” and the resulting scales had decent reliability scores. I discussed these elements of scale performance in more detail in the first couple of posts in the C4 series.
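
For anyone who wants to run that kind of two-factor check themselves, here’s a minimal sketch in Python using scikit-learn’s FactorAnalysis. The data file and item names are hypothetical stand-ins, and the post doesn’t say what software the actual analysis used:

```python
# A two-factor check along these lines, sketched with scikit-learn.
# The data file and item names are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("aus_survey.csv")  # hypothetical file

# Six hierarchy-egalitarianism items + six individualism-communitarianism
# items (the "short form" battery), assumed already numerically coded.
items = df[[f"hier_{i}" for i in range(1, 7)] +
           [f"indiv_{i}" for i in range(1, 7)]]

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)

loadings = pd.DataFrame(fa.components_.T,
                        index=items.columns,
                        columns=["factor_1", "factor_2"])
# If the scales behave, the six "hier" items should load mainly on one
# factor and the six "indiv" items mainly on the other.
print(loadings.round(2))
```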

The Hatfield-Dodds team included the CC scales in a wide-ranging survey of beliefs about and attitudes toward various aspects of climate change. Based on the results, I think it’s fair to say that Australia is at least as culturally polarized as the U.S.

The complexion of the cultural division is the same there as here. People whose values are more egalitarian and communitarian tend to see the risk of climate change as high, while those whose values are more hierarchical and individualistic see it as low. This figure reflects the size of the difference as measured on a “climate change risk” scale that was formed by aggregating five separate survey items (Cronbach’s α = 0.90):

Looking at individual items helps to illustrate the meaning of this sort of division — its magnitude, the sorts of issues it comprehends, etc.

Asked whether they “believe in climate change,” e.g., about 50% of the sample said “yes.” Sounds like Australians are ambivalent, right? Well, in fact, most of them are pretty sure; they just aren’t, culturally speaking, of one mind. There’s about an 80% chance that a “typical” egalitarian communitarian, e.g., will say that climate change is definitely happening; the likelihood that a hierarchical individualist will, in contrast, is closer to 20%.

There’s about a 25% chance the hierarchical individualist will instead say, “NO!” in response to this same question. There’s only a 1% chance that an egalitarian communitarian in Australia will give that response!

BTW, to formulate these estimates, I fit a multinomial logistic regression model to the responses for the entire sample, and then used the parameter estimates (the logit coefficients and the standard errors) to run Monte Carlo simulations for the indicated “culture types.” You can think of the simulation as creating 1,000 “hierarch individualists” and 1,000 “egalitarian communitarians” and asking them what they think. By plotting these simulated values, anyone, literally, can see the estimated means and the precision of those estimates associated with the logit model. No one, not even someone well versed in statistics, can see such a result in a bare regression output like this:

Yet this sort of table is exactly the kind of uninformative reporting that most social scientists (particularly economists) use, and use exclusively. There’s no friggin’ excuse for this, either, given that public-spirited stats geniuses like Gary King have not only been lambasting the practice for years but also producing free, high-quality software like Clarify, which is what I used to run the Monte Carlo simulations here. (The graphic reporting technique I used, plotting the density distributions of the simulated values to illustrate the size and precision of contrasting estimates, is something I learned from King’s work too.)
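
For the curious, here’s roughly what that simulate-then-plot routine looks like in code. This is a minimal Python/statsmodels approximation of the Clarify-style procedure, not the actual analysis: the data file, variable names, the three-category response coding, and the ±1 SD definition of the “typical” culture types are all assumptions on my part.

```python
# A sketch of the simulate-then-plot routine, using Python/statsmodels
# rather than Stata/Clarify. Everything specific is a stand-in.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

df = pd.read_csv("aus_survey.csv")   # hypothetical file
# Response assumed coded 0 = "no", 1 = "not sure", 2 = "yes, definitely";
# culture scales assumed standardized (mean 0, SD 1).
y = df["believe_cc"]
X = sm.add_constant(df[["hierarchy", "individualism"]])

res = sm.MNLogit(y, X).fit(disp=False)

# Draw 1,000 coefficient vectors from the estimated sampling distribution
# (multivariate normal: mean = point estimates, cov = covariance matrix).
beta_hat = res.params.values.ravel(order="F")
draws = np.random.default_rng(0).multivariate_normal(
    beta_hat, np.asarray(res.cov_params()), size=1000)

def simulated_probs(profile, draws, k_exog, n_cat):
    """Category probabilities for one covariate profile, one row per draw."""
    out = np.empty((len(draws), n_cat))
    for i, b in enumerate(draws):
        B = b.reshape(k_exog, n_cat - 1, order="F")
        xb = np.concatenate([[0.0], profile @ B])  # baseline category = 0
        out[i] = np.exp(xb) / np.exp(xb).sum()
    return out

k_exog, n_cat = X.shape[1], 3
hi = np.array([1.0,  1.0,  1.0])   # const, hierarchy +1 SD, indiv +1 SD
ec = np.array([1.0, -1.0, -1.0])   # const, both -1 SD

p_hi = simulated_probs(hi, draws, k_exog, n_cat)
p_ec = simulated_probs(ec, draws, k_exog, n_cat)

# Plot the distribution of Pr("yes, definitely") for each simulated type:
# the location of each curve is the estimate, its spread the precision.
for p, label in [(p_hi, "hierarch individualist"),
                 (p_ec, "egalitarian communitarian")]:
    plt.hist(p[:, 2], bins=40, density=True, alpha=0.5, label=label)
plt.xlabel('Pr("climate change is definitely happening")')
plt.legend()
plt.show()
```

The plot at the end is the whole point, and it’s exactly King’s point: the spread of the simulated values shows you the precision of each estimate at a glance, which no table of logit coefficients ever will.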

So don’t be awed the next time someone puts a mindless table like this in a paper or on a PowerPoint slide; complain!

Oh … there are tons of cool things in the Hatfield-Dodds et al. survey, and I’m sure we’ll write them all up in the near future. But for now, here’s one more result from the Australian C4 study:

Around 20% of the survey respondents indicated that climate change was caused either “entirely” or “mainly” by “nature” rather than by “human activity.” But the likelihood that a typical hierarchical individualist would view climate change that way was around 48% (+/-, oh, 7% at 0.95 confidence, by the looks of the graphic). There’s only about a 5% chance that an egalitarian communitarian would treat humans as an unimportant contributor to climate change.

You might wonder how around 50% of the hierarch individualists one might find in Australia would likely tell you that “nature” is causing climate change when fewer than 25% of them are likely to say “yes” if you ask them whether climate change is happening at all.

But you really shouldn’t. You see, the answers people give to individual questions on a climate change survey aren’t really answers to those questions. They are just expressions of a global pro-con attitude toward the issue. Psychometrically, the answers are observable “indicators” of a “latent” variable. As I’ve explained before, in these situations it’s useful to ask a bunch of different questions and aggregate them: the resulting scale (which will be one or another way of measuring the covariance of the responses) will be a more reliable (i.e., less noisy) measure of the latent attitude than any one item. Although if you are in a pinch (and don’t want to spend a lot of money or time asking questions), just one item, “the industrial strength risk perception measure,” will work pretty well!
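
If you want to see how little machinery the aggregation step actually involves, here’s a minimal sketch, assuming five like-scored climate-risk items in a hypothetical data file:

```python
# Aggregating item responses into a scale and checking its reliability.
# Column names and the data file are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_var = items.var(ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

df = pd.read_csv("aus_survey.csv")                 # hypothetical file
items = df[[f"cc_risk_{i}" for i in range(1, 6)]]  # five risk items

print(f"alpha = {cronbach_alpha(items):.2f}")      # post reports 0.90
df["cc_risk_scale"] = items.mean(axis=1)           # the aggregated scale
```

The mean score is the crudest way to aggregate (a factor-score or other covariance-weighted composite works too), but even it will be a less noisy measure of the latent attitude than any single item.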

The one thing you shouldn’t do, though, is get all excited about responses to specific items or differences among them. Pollsters will do that because they don’t really have much of a clue about psychometrics.

Hmmm… maybe I’ll do another post on “pollster” fallacies, and on how fixation on particular questions, variations in the responses between them, and fluctuations in them over time mislead people about public opinion on climate change.
