Everyone knows that science journalist Chris Mooney has written a book entitled The Republican Brain. In it, he synthesizes a wealth of social science studies in support of the conclusion that having a conservative political outlook is associated with lack of reflection and closed-mindedness.
I read it. And I liked it a lot.
Mooney possesses the signature craft skills of a first-rate science journalist, including the intelligence (and sheer determination) necessary to critically engage all manner of technical material, and the expositional skill required to simultaneously educate and entertain.
He’s also diligent and fair-minded.
And of course he’s spirited: he has a point of view plus a strong desire to persuade—features that for me make the experience of reading Mooney’s articles and books a lot of fun, whether I agree with his conclusions (as often I do) or not.
As it turns out, I don’t feel persuaded of the central thesis of The Republican Brain. That is, I’m not convinced that the mass of studies it draws on supports the inference that Republicans/conservatives reason in a manner that is different from, and of lower quality than, the way Democrats/liberals reason.
The problem, though, is with the studies, not Mooney’s synthesis. Indeed, Mooney’s account of the studies enabled me to form a keener sense of exactly what I think the defects are in this body of work. That’s a testament to how good he is at what he does.
In this, the first of two (additional; this issue is impossible to get away from) posts, I’m going to discuss what I think the shortcomings in these studies are. In the next post, I’ll present some results from a new study of my own, the design of which was informed by this evaluation.
1. Validity of quality-of-reasoning measures
The studies Mooney assembles are not all of a piece, but the ones that play the largest role in the book and in the literature correlate ideology or party affiliation with one or another measure of cognitive processing and conclude that conservatism is associated with “lower” quality reasoning or closed-mindedness.
These measures, though, are of questionable validity. Many are based on self-reporting; “need for cognition,” for example, literally just asks people whether the “notion of thinking abstractly is appealing to” them, etc. Others use various personality-style constructs, such as the “authoritarian” personality, that researchers believe are associated with dogmatism. Evidence that these sorts of scales actually measure what they claim to measure is sparse.
Objective measures—ones that assess performance on specific cognitive tasks—are much better. The best of these, in my view, are the “cognitive reflection test” (CRT), which measures the disposition to check intuition with conscious, analytical reasoning, and “numeracy,” which measures quantitative reasoning capacity and includes CRT as a subcomponent.
These measures have been validated. That is, they have been shown to predict—very strongly—the disposition of people either to fall prey to or avoid one or another form of cognitive bias.
As far as I know, CRT and numeracy don’t correlate in any clear way with ideology, cultural predispositions, or the like. Indeed, I myself have collected evidence showing they don’t (and have talked with other researchers who report the same).
2. Relationship between quality-of-reasoning measures and motivated cognition
Another problem: it’s not clear that the sorts of things that even a valid measure of reasoning quality gets at have any bearing on the phenomenon Mooney is trying to explain.
That phenomenon, I take it, is the persistence of cultural or ideological conflict over risks and other facts that admit of scientific evidence. Even if those quality-of-reasoning measures that figure in the studies Mooney cites are in fact valid, I don’t think they furnish any strong basis for inferring anything about the source of controversy over policy-relevant science.
Mooney believes, as do I, that such conflicts are likely the product of motivated reasoning—which refers to the tendency of people to fit their assessment of information (not just scientific evidence, but argument strength, source credibility, etc.) to some end or goal extrinsic to forming accurate beliefs. The end or goal in question here is promotion of one’s ideology or perhaps securing of one’s connection to others who share it.
There’s no convincing evidence I know of that the sorts of defects in cognition measured by quality of reasoning measures (of any sort) predict individuals’ vulnerability to motivated reasoning.
Indeed, there is strong evidence that motivated reasoning can infect or bias higher-level processing—analytical or systematic, as it has traditionally been called, or “System 2” in Kahneman’s adaptation—as well as lower-level, heuristic or “System 1” reasoning.
We aren’t the only researchers who have demonstrated this, but we did in fact find evidence supporting this conclusion in our recent Nature Climate Change study. That study found that cultural polarization—the signature of motivated reasoning here—is actually greatest among persons who are highest in numeracy and scientific literacy. Such individuals, we concluded, are using their greater facility in reasoning to nail down even more tightly the connection between their beliefs and their cultural predispositions or identities.
So, even if it were the case that liberals or Democrats scored “higher” on quality-of-reasoning measures, there’s no reason to think they would be immune to motivated reasoning. Indeed, they might just be even more disposed to use it, and to use it effectively (although I myself doubt that this is true; as I’ve explained previously, I think ideologically motivated reasoning is uniform across cultural and ideological types).
3. Internal validity of motivated reasoning/biased assimilation experiments
The way to figure out whether motivated reasoning is correlated with ideology or culture is with experiments. There are some out there, and Mooney mentions a few. But I don’t think those studies are appropriately designed to measure asymmetry in motivated reasoning; indeed, I think many of them are just not well designed, period.
A common design simply measures whether people with one or another ideology, or perhaps an existing commitment to a position, change their minds when shown new evidence. If they don’t—and if in fact the participants form different views on the persuasiveness of the evidence—this is counted as evidence of motivated reasoning.
Well, it really isn’t. People can form different views of evidence without engaging in motivated reasoning. Indeed, their different assessments of the evidence might explain why they are coming into the experiment in question with different beliefs. The study results, in that case, would be showing only that people who’ve already considered evidence and reached a result don’t change their mind when you ask them to do it again. So what?
Sometimes studies designed in this way, however, do show that “one side” budges more in the face of evidence that contradicts their position (on nuclear power, say) than the other does on that issue or on some other (say, climate change).
Well, again, this is not evidence that the side that’s holding fast is engaged in motivated reasoning. Those on that side might have already considered the evidence in question and rejected it. They might be wrong to reject it, but because we don’t know why they rejected it earlier, their disposition to reach the same conclusion again does not show they are engaged in motivated reasoning, which consists in a disposition to attend to information in a selective, biased fashion oriented toward supporting one’s ideology.
Indeed, the evidence that challenges the position of the side that isn’t budging in such an experiment might in fact be weaker than the evidence that is moving the other side to reconsider. The design doesn’t rule this out—so the only basis for inferring that motivated reasoning is at work is whatever assumptions one started with, which gain no additional support from the study results themselves.
There is, in my view, only one compelling way to test the hypothesis that motivated reasoning explains the evaluation of information. That’s to experimentally manipulate the ideological (or cultural) implications of the information or evidence that subjects are being exposed to. If they credit that evidence when doing so is culturally/ideologically congenial, and dismiss it when doing so is ideologically uncongenial, then you know that they are fitting their assessment of information (the likelihood ratio they assign to it, in Bayesian terms) to their cultural or ideological predispositions.
CCP has done studies like that. In one, e.g., we showed that individuals who watched a video of protestors reported perceiving them to be engaged in intimidating behavior—blocking, obstructing, shouting in onlookers’ faces, etc.—when the subjects believed the protest involved a cause (either opposition to abortion rights or objection to the exclusion of gays and lesbians from the military) that was hostile to their own values. If the subjects were told the protestors’ cause was one that affirmed the subjects’ own values, then they saw the protestors as engaged in peaceful, persuasive advocacy.
That’s motivated reasoning. One and the same piece of evidence—videotaped behavior of political protests—was seen one way or another (assigned a likelihood ratio different from or equal to 1) depending on the cultural congeniality of seeing it that way.
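To spell out that Bayesian framing (this is standard textbook notation, not anything drawn from the studies themselves; H and E are just placeholder symbols for a factual claim and a piece of evidence):

```latex
% Posterior odds = likelihood ratio x prior odds
\frac{P(H \mid E)}{P(\neg H \mid E)}
  \;=\;
  \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
  \;\times\;
  \frac{P(H)}{P(\neg H)}
```

An unbiased assessment fixes the likelihood ratio by the content of the evidence alone; what the protest experiment shows is the assigned ratio sliding above or below 1 depending on whether crediting the evidence is congenial to the viewer’s values.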
In another study, we found that subjects engage in motivated reasoning when assessing the expertise of scientists on disputed risk issues. In that one, how likely subjects were to recognize a scientist as an “expert” on climate change, gun control, or nuclear power depended on the position that scientist was represented to be taking. We manipulated that—while holding the qualifications of the scientist, including his membership in the National Academy of Sciences, constant.
Motivated reasoning is unambiguously at work when one credits or discredits the same piece of evidence depending on whether it supports or contradicts a conclusion that one finds ideologically appealing. And again we saw that process of opportunistic, closed-minded assessment of evidence at work across cultural and ideological groups.
Actually, Mooney discusses this second study in his book. He notes that the effect size—the degree to which individuals selectively afforded or denied weight to the view of the featured scientist depending on the scientist’s position—was larger among individuals who subscribed to a hierarchical, individualistic worldview than among those who subscribed to an egalitarian, communitarian one. The former tend to be more conservative, the latter more liberal.
As elsewhere in the book, he was reporting with perfect accuracy here.
Nevertheless, I myself don’t view the study as supporting any particular inference that conservatives or Republicans are more prone to motivated reasoning. Both sides (as it were) displayed motivated reasoning—plenty of it. What’s more, the measures we used didn’t allow us to assess the significance of any difference in the degree each side displayed. Finally, we’ve done other studies, including the one involving the videotape of the protestors, in which the effect sizes for the two sides were clearly comparable.
But here’s the point: to be valid, a study that finds asymmetry in ideologically motivated reasoning must allow the researcher to conclude both that subjects are selectively crediting or discrediting evidence conditional on its congruence with their cultural values or ideology and that one side is doing so to a degree that is both statistically and practically more pronounced than the other.
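To make that requirement concrete, here is a minimal sketch of how such an asymmetry test might be set up, using simulated data and hypothetical variable names (nothing below is drawn from the design or data of any actual CCP study): the experimental manipulation of congeniality is crossed with subjects’ worldview, motivated reasoning shows up as the main effect of congeniality, and the claimed asymmetry is the worldview-by-congeniality interaction, which has to be both statistically reliable and practically large.

```python
# Minimal sketch of an asymmetry test on simulated data (hypothetical variable
# names; illustrative only, not the design or data of any actual study).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

df = pd.DataFrame({
    # 0 = egalitarian/communitarian subject, 1 = hierarchical/individualist subject
    "worldview": rng.integers(0, 2, n),
    # randomly assigned framing: 0 = evidence uncongenial to the subject, 1 = congenial
    "congenial": rng.integers(0, 2, n),
})

# Simulated outcome: how much weight the subject gives the evidence (arbitrary scale).
# Both groups credit congenial evidence more (motivated reasoning); the interaction
# term builds in a hypothetical extra shift for one group (the "asymmetry").
df["credence"] = (
    3.5
    + 1.0 * df["congenial"]
    + 0.4 * df["congenial"] * df["worldview"]
    + rng.normal(0, 1.0, n)
)

# Main effect of `congenial`: motivated reasoning in the sample overall.
# `worldview:congenial` interaction: the asymmetry claim -- it must be statistically
# significant AND practically large before "one side does it more" is warranted.
model = smf.ols("credence ~ worldview * congenial", data=df).fit()
print(model.summary())
```

The design choice that matters is the random assignment of congeniality: without it, a difference between the two sides could reflect nothing more than the priors they walked in with.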
Studies that don’t do that might do other things, like supplying occasions for sneers and self-congratulatory pats on the back among those who treat cheering for “their” political ideology as akin to rooting for a favorite professional sports team (I know Mooney certainly doesn’t do that).
But they don’t tell us anything about the source of our democracy’s disagreements about various forms of policy-relevant science.
In the next post in this “series,” I’ll present some evidence that I think does help to sort out whether an ideologically uneven propensity to engage in ideologically motivated reasoning is the real culprit.
References
Chen, S., Duckworth, K. & Chaiken, S. Motivated Heuristic and Systematic Processing. Psychological Inquiry 10, 44-49 (1999).
Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).
Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).
Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).
Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Clim. Change, advance online publication (2012).
Liberali, J.M., Reyna, V.F., Furlan, S., Stein, L.M. & Pardo, S.T. Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment. Journal of Behavioral Decision Making, advance online publication (2011).
Mooney, C. The Republican Brain: The Science of Why They Deny Science—and Reality. (John Wiley & Sons, Hoboken, NJ; 2012).
Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition 39, 1275-1289 (2011).
Weller, J.A., Dieckmann, N.F., Tusler, M., Mertz, C.K., Burns, W.J. & Peters, E. Development and Testing of an Abbreviated Numeracy Scale: A Rasch Analysis Approach. Journal of Behavioral Decision Making, advance online publication (2012).