Happily, Chris Mooney has indicated that he is planning to take up the points I made in my post on his Republican Brain, and also the data I collected to help test surmises and hunches I formed while reflecting on his book.
I certainly want to give him his chance to present his position in full without the distraction of piecemeal qualifications, clarifications, and counterarguments.
But his first post does make me regret a part of mine, in which I conveyed low regard for what is in fact high-quality work.
The bungling occurs in the paragraph that “questions” the “validity” of self-reported reasoning-style measures and describes the evidence for their validity as “spare.”
I do happen to believe the Cognitive Reflection Test is more predictive than self-report measures of vulnerability to one or another form of bias associated with what Kahneman calls System 1 (unreflective, fast) reasoning. I think this because of various recent studies, including ones in the links & references in that post. It’s also pretty well established that people who score high on all manner of reasoning-quality measures are no better than ones who score low at consciously assessing their own vulnerability to bias; so it stands to reason, I think, that we should try to use objective or performance-based measures rather than self-report ones to predict individual differences in reasoning style.
But how best to measure reasoning styles and reasoning quality is not a settled issue–indeed, it’s at the heart of a very interesting scholarly debate.
Moreover, “validity” is not what’s at stake in that debate; predictive power is. My language was recklessly imprecise. I am truly embarrassed by that.
What I should have confined myself to saying is that these measures have not been validated as indicators of motivated reasoning. That’s the dynamic that is understood – by Chris and by many scholars, including ones whose work he cites – to be driving ideological polarization over issues that admit of scientific investigation.
Indeed, far from being understood to predict motivated cognition, the sorts of measures of dual-process reasoning that came before CRT were understood *not* to. There is ample work showing that the higher-level reasoning processes thought to be measured by these scales can be recruited for identity protection and other sorts of motivated reasoning.
So why suppose that any correlation between them and ideology predicts motivated reasoning or otherwise explains conflict over policy-relevant science? I very much do want to pose a (respectful!) challenge, one aimed at enlarging our mutual understanding, to those scholars who think that disparities in systematic or reflective reasoning, however measured and on the part of any group, are the explanation for this phenomenon.
The study I conducted was meant to explore that. I used CRT as my measure of high-quality reasoning because it is in fact now at the cutting edge of dual-process reasoning research, largely as a result of the emphasis that Kahneman puts on it as the best measure of the tendency to use System 2 as opposed to System 1 reasoning. I found no meaningful correlation between CRT and ideology, which seems to me reason to doubt that ideology correlates with the sorts of cognitive biases that quality-of-reasoning measures in general are supposed to measure.
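For readers who want the analytic logic made concrete, here is a minimal sketch of how one might check for such a correlation. It is not the study’s actual analysis; the file name and column names (crt_score, conserv_repub) are hypothetical placeholders standing in for a subject-level dataset with a CRT score and a composite left–right ideology scale.

```python
# Minimal sketch (assumed file and column names, not the study's codebook):
# correlate CRT performance with a left-right ideology scale.
import pandas as pd
from scipy import stats

df = pd.read_csv("crt_ideology.csv")  # hypothetical: one row per subject
df = df.dropna(subset=["crt_score", "conserv_repub"])

# Pearson correlation between number of correct CRT answers and ideology
# (higher conserv_repub = more conservative/Republican, by assumption)
r, p = stats.pearsonr(df["crt_score"], df["conserv_repub"])
print(f"r = {r:.3f}, p = {p:.3f}")
```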
But in assessing the thesis of Republican Brain – that conservative ideology is associated with styles of thought responsible for political conflict over policy-relevant science – I don’t think anything at all turns on whether CRT or any other measure is better for measuring vulnerability to cognitive biases. What matters is experimental proof of vulnerability to motivated reasoning, and whether there’s any correlation between that and either ideology or higher-level cognition. That’s what the experiment was designed to show: that those who use higher-quality reasoning are not immune from motivated reasoning.
In the study, subjects conformed their own assessment of the validity of CRT as a predictor of bias to their ideological predispositions.
Conservatives did this.
But so did liberals: they tended to agree that the CRT is a valid test of “reflectiveness” and “open-mindedness” when they were told that people who credit evidence of climate change scored high on it. But when told that people who are skeptical in fact score higher – well, then they were much more likely to dismiss CRT as invalid for that purpose.
What’s more, that effect was magnified by high scores on CRT: people who are more disposed to System 2 reasoning (as measured by CRT) were much more likely to fit their assessments of CRT’s validity to their ideological predispositions.
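Again, just to make the structure of that claim concrete, here is a rough sketch, under assumed variable names, of how the “magnification” pattern could be modeled. The data file and the variables (validity for the perceived validity of CRT, skeptics_higher for the experimental condition, conserv_repub for ideology, crt_score for CRT performance) are placeholders, not the study’s actual codebook; motivated reasoning would show up in the condition × ideology interaction, and its magnification by reflection in the three-way term.

```python
# Minimal sketch (hypothetical file and variable names): regress perceived CRT
# validity on experimental condition, ideology, CRT score, and their interactions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("crt_experiment.csv")  # hypothetical experimental dataset

model = smf.ols(
    "validity ~ skeptics_higher * conserv_repub * crt_score", data=df
).fit()
print(model.summary())
```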
So liberals and conservatives displayed motivated reasoning. And they both did it more if they were the sorts of people inclined to use high-quality cognition as reflected in a very prominent measure of reflective, open-minded reasoning.
That’s evidence, I think, that the brains of liberals and conservatives are alike in this respect. And it’s all the more reason to doubt that correlations between ideology and reasoning-style measures can help us to figure out why or when deliberations over policy-relevant science are prone to political polarization or what we should do to try to minimize that sad spectacle.