Do mass political opinions cohere? And do psychologists “generalize without evidence” more often than political scientists?

Stats Legend Andrew Gelman (whose blog everyone who enjoys being surprised and who values high-quality analytical thinking should read daily) has an interesting post on Steven Pinker.

Pinker asks “[w]hy, if you know a person’s position on gay marriage, can you predict that he or she will want to increase the military budget and decrease the tax rate,” a question he answers by observing that “[p]olitical philosophers have long known that the ideologies are rooted in different conceptions of human nature — a conflict of visions so fundamental as to align opinions on dozens of issues that would seem to have nothing in common.”

Gelman responds by (1) doing some quick GSS correlations, on the basis of which he concludes that “attitudes on such diverse issues are not so highly correlated”; and then (2) attributing Pinker’s error to Pinker’s being a psychologist rather than a political scientist and thus prone to “present[ing] ideas that are thought-provoking but . . . too general to quite work,” in contrast to political scientists who “take such ideas and try to adapt them more closely to particular circumstances.”

Some thoughts:

1. Pinker is clearly right to note that mass political opinions on seemingly diverse issues cohere, and Andrew, I think, is way too quick to challenge this.

I could cite to billions of interesting papers, but I’ll just show you what I mean instead. A recent CCP data collection involving a nationally representative on-line sample of 1750 subjects included a module that asked the subjects to indicate on a six-point scale “how strongly . . . you support or oppose” a collection of policies:

  1. policy_gun  Stricter gun control laws in the United States.
  2. policy_healthcare  Universal health care.
  3. policy_taxcut  Raising income taxes for persons in the highest-income tax bracket.
  4. policy_affirmative_action  Affirmative action for minorities.
  5. policy_warming  Stricter carbon emission standards to reduce global warming.

Positions clustered on these “diverse” items big time. The average inter-item correlation was 0.66. The Cronbach’s alpha—a scale reliability measure based on item covariance and the number of items—was 0.91.

This is a degree of coherence that would make any social scientist – psychologist or political scientist – beam. The highest possible alpha is 1.0, and anything above 0.70 is usually regarded as signifying a high degree of reliability. Low reliability, measured in this way, is its own punishment, since it constrains the power of any sort of explanatory or predictive model involving the scale. With a score of 0.91, you can be confident that the power of your model won’t be dissipated by the noise associated with the imprecision of the observable “indicators” you are using to measure the latent variable.
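For readers who want to run this sort of check on their own data, here is a minimal sketch in Python/NumPy of how the two statistics reported above are computed. The data below are made up for illustration (a latent dimension plus noise), not the actual CCP sample:

```python
import numpy as np

def avg_interitem_r(items):
    """Mean pairwise Pearson correlation among the item columns."""
    r = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    iu = np.triu_indices_from(r, k=1)  # upper triangle = distinct pairs
    return r[iu].mean()

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy illustration: five fake 6-point "policy" items that share a common
# latent dimension plus noise (NOT the CCP data).
rng = np.random.default_rng(1)
latent = rng.normal(size=1000)
items = np.clip(
    np.round(3.5 + 1.2 * latent[:, None]
             + rng.normal(scale=0.8, size=(1000, 5))),
    1, 6)
print(avg_interitem_r(items), cronbach_alpha(items))
```

With item loadings this strong, the toy data produce an average inter-item correlation and alpha in the same neighborhood as the figures reported above.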

The latent variable being picked up by these policy items is obviously something akin to right-left political preferences, so let’s call the resulting measure “Liberal_policy.” (Additional items cohered better with each other than with these, forming a second “libertarian policy preference” scale; but let’s keep things simple.)

Being able to form a scale like this with a general population sample is pretty good evidence in itself (and better than just picking two items out of GSS and seeing if they correlate) that people’s opinions on such matters cohere.

But just to make the case even stronger, let’s consider how much of the variance in liberal policy preferences can be explained by ideology.

In the same data set, there was a five-point measure for self-described “liberal-conservative ideology” and a seven-point one for identification with the two major political parties. Those two items were also highly correlated (r = 0.70), so I combined them into a scale (α = 0.82) coded to represent a right-wing ideological disposition, which I labeled “Conserv_repub.”
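As it happens, the reported α = 0.82 is exactly what you’d expect from r = 0.70: for two standardized items, Cronbach’s alpha reduces to the Spearman-Brown formula applied to the inter-item correlation. A quick sketch (the `zscore` helper and the commented combining step are illustrative, not the actual CCP code):

```python
import numpy as np

# For two standardized items, Cronbach's alpha equals Spearman-Brown:
# alpha = 2r / (1 + r)
r = 0.70
alpha_two_item = 2 * r / (1 + r)
print(round(alpha_two_item, 2))  # 0.82, matching the reported reliability

def zscore(x):
    """Standardize a variable (mean 0, SD 1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# A common way to build the combined scale: standardize each item, average.
# conserv_repub = (zscore(ideology_item) + zscore(party_id_item)) / 2
```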

Regressing Liberal_policy on Conserv_repub, I discovered that the percentage of variance explained (R2) was 0.60. That’s high, as any competent psychologist or political scientist would tell you, and as I’m sure Andrew would agree!
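The regression itself is a one-liner sort of exercise. Here is a sketch with simulated stand-ins for the two scales (the coefficient and noise level are chosen to produce an R2 near 0.60, purely for illustration; these are not the CCP measures):

```python
import numpy as np

def ols_r2(y, X):
    """R^2 from an OLS regression of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Fake stand-ins for the two scales: an ideology scale and a policy
# scale that loads heavily on it.
rng = np.random.default_rng(2)
conserv_repub = rng.normal(size=1000)
liberal_policy = -0.77 * conserv_repub + rng.normal(scale=0.63, size=1000)
print(round(ols_r2(liberal_policy, conserv_repub), 2))
```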

Now Andrew noted that the degree of coherence in political preferences tends to be conditional on other characteristics, such as wealth, education, and political interest. Typically, political scientists use a “political knowledge” measure to assess how coherence in ideological positions varies.

I had a measure of that (a 9-item civics-test sort of thing) in the data set too. So I added it and a cross-product interaction term to my regression model. It bumped up the R2 – variance explained – by 4%, an increment that was statistically significant.
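The model comparison looks like this in sketch form. The simulated data below build in a modest knowledge-by-ideology interaction so that the R2 increment comes out in the same ballpark as the one reported; again, these are illustrative numbers, not the CCP data:

```python
import numpy as np

def ols_r2(y, X):
    """R^2 from an OLS regression of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - (y - X @ beta).var() / y.var()

# Simulated stand-in data: political knowledge amplifies the
# ideology-policy relationship via a cross-product term.
rng = np.random.default_rng(3)
n = 1000
ideology = rng.normal(size=n)
knowledge = rng.normal(size=n)
policy = (-0.7 * ideology
          - 0.2 * ideology * knowledge
          + rng.normal(scale=0.6, size=n))

r2_base = ols_r2(policy, ideology)
r2_full = ols_r2(policy, np.column_stack(
    [ideology, knowledge, ideology * knowledge]))
print(round(r2_full - r2_base, 3))  # the R^2 increment
```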

Seems small, but how practically important is that? A commenter on Andrew’s blog noted that I tend to criticize fixating on R2 as an effect-size measure; my point, which is one that good social scientists – political scientists and psychologists! Andrew too! – have been making for decades, is that R2 is not a good measure of the practical significance of an effect size, a matter that has to be determined by the use of judgment in relation to the phenomenon at issue.

Well, to help us figure that out, I ran a Monte Carlo simulation to generate the predicted probability that a typical “Liberal Democrat” (-1 SD on Conserv_Repub) and a typical “Conservative Republican” (+1 SD) would support “stricter gun control laws” (seems topical; this is pre-Newtown, so it would be interesting to collect some data now to follow up), conditional on being “low” (-1 SD) or “high” (+1 SD) in political knowledge.
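For readers unfamiliar with the technique (from King, Tomz & Wittenberg, cited below): you draw coefficient vectors from their approximate sampling distribution and average the resulting predicted probabilities. Here is a sketch with hypothetical logit coefficients and a stand-in diagonal covariance matrix; these numbers are invented for illustration, not the estimates from the actual fitted model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical logit coefficients for Pr(support stricter gun control):
# intercept, Conserv_Repub, political knowledge, and their cross-product.
beta_hat = np.array([0.4, -1.1, 0.0, -0.8])
V = np.diag([0.05, 0.04, 0.03, 0.04])  # stand-in covariance matrix

def sim_support_prob(conserv_repub, knowledge, n_sims=10_000):
    """King-Tomz-Wittenberg-style simulation: draw coefficient vectors
    from their sampling distribution, average Pr(support) at a profile."""
    draws = rng.multivariate_normal(beta_hat, V, size=n_sims)
    profile = np.array(
        [1.0, conserv_repub, knowledge, conserv_repub * knowledge])
    logits = draws @ profile
    return float((1.0 / (1.0 + np.exp(-logits))).mean())

for label, cr in (("Liberal Democrat", -1.0),
                  ("Conservative Republican", 1.0)):
    for know_label, pk in (("low knowledge", -1.0), ("high knowledge", 1.0)):
        print(label, know_label, round(sim_support_prob(cr, pk), 2))
```

With coefficients like these, the simulated partisans diverge much more sharply at high political knowledge than at low, which is the qualitative pattern at issue.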

Seems (a) like variance in political knowledge (whatever its contribution to R2) can matter a lot – the probability that a high–political-knowledge Republican will support gun control is a lot lower than that for a low–political-knowledge one – but (b) there is still plenty of disagreement even among low–political-knowledge subjects.

I’d say, then, that Andrew is being a bit too harsh on Pinker’s premise about political preference coherence.

2. Pinker is clearly wrong—not just in his answer but in his style of reasoning—to connect this sort of coherence to “different conceptions of human nature” among people of opposing ideologies.

Pinker, however, is indeed doing something very objectionable: he is engaged in rank story-telling.

He notes that political philosophers identify ideologies with different conceptions of “human nature,” a “conflict of visions so fundamental as to align opinions on dozens of issues.” Well, maybe political philosophers do do that. But the idea that “different conceptions of ‘human nature’ ” explain coherence and variance in mass political opinion is an empirical claim, and as far as I know there’s not any support for it.

I think it’s almost certainly false. Measures of ideology of the sort that I have used here have not – as far as I know; please do tell me if I’m wrong: the pleasure of learning something new will more than compensate me for the embarrassment of being shown to be ignorant — been validated as predictors of “different conceptions of human nature.” Indeed, I think the idea that ordinary members of the public have “conceptions of human nature” is extravagant—the sort of thing only someone who has never ventured outside a university campus would likely believe.

There are myriad theories about the puzzling question of how ordinary people – who really aren’t philosophers, aren’t that interested in politics, and are consumed with other things – manage to form coherent ideological preferences. And they’ve been tested empirically.

It’s irritating for anyone who is familiar with all that work to see Pinker advance the sort of claim he does—which he presents not even as a conjecture but as a simple, unqualified, fact-of-the-matter report.

3. Pinker’s mistake is one psychologists would resent as much as political scientists.

The sort of thing Pinker is doing here generalizes.  Popular commentators love to reach into the grab bag of decision science mechanisms and construct just-so stories that purport to “explain” complicated phenomena (e.g., political controversy over climate change).

Good social scientists hate this. Indeed, Pinker himself generally doesn’t like it; he complains about this practice in his excellent book, The Better Angels of Our Nature: Why Violence Has Declined, which admirably tries to connect trends in violence over history to mechanisms that themselves have support in evidence. I find it sort of deflating to see that he seems to adopt a different approach in the writing he does for the New York Times.

But the point is, resentment of story-telling is something that psychologists and political scientists would both experience. It’s not a consequence of Pinker being a psychologist!

4. Ironically, Andrew is making the sort of mistake he says Pinker made.

This last point follows from all the others. Andrew sees Pinker doing something irritating, and then treats a conjecture (I think a pretty uninteresting, implausible one; but all conjectures are created equal – test away!) as a general law that explains this particular instance, etc.

But now I will offer a conjecture, based on an observation-grounded theory.

The observation-grounded theory is that Andrew Gelman has a virtuous Bayesian disposition. That is, he is the sort of person who very happily updates and revises his views, which he always regards as just provisional estimates anyway.

The conjecture: that Andrew, on reflection, will agree that he offered a poor diagnosis (“psychologists generalize without evidence, unlike political scientists, who look for concrete evidence in particulars!”) of Pinker’s objectionable style of argumentation here (which, again, strikes me as uncharacteristic of Pinker himself!).

And now, let’s collect some evidence.

(One more prediction, or hope: Andrew will like my graphic!)

p.s. Ideological coherence in policy preferences isn’t nearly as interesting – nearly as surprising, as puzzling – as ideological or cultural coherence in factual beliefs (e.g., “earth is/is not heating up” & “children of gays & lesbians do worse/no worse in life than ones raised by heterosexual parents”). That’s what CCP research is all about. Perhaps I’ll do another post on that.

References

Abelson, R.P. A Variance Explanation Paradox: When a Little is a Lot. Psychological Bulletin 97, 129-133 (1985).

Delli Carpini, M.X. & Keeter, S. What Americans Know About Politics and Why It Matters. (Yale University Press, New Haven; 1996).

Gelman, A. & Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models. (Cambridge University Press, Cambridge ; New York; 2007).

John, O.P. & Benet-Martínez, V. in Handbook of research methods in social and personality psychology. (eds. H.T. Reis & C.M. Judd) 339-369 (Cambridge University Press, New York; 2000).

King, G. How Not to Lie with Statistics. Am. J. Pol. Sci. 30, 666-687 (1986).

King, G., Tomz, M. & Wittenberg, J. Making the Most of Statistical Analyses: Improving Interpretation and Presentation. Am. J. Pol. Sci. 44, 347-361 (2000).

Pinker, S. The Better Angels of Our Nature: Why Violence Has Declined. (Viking, New York; 2011).
