This is the last installment of my series on “probing/prodding” the Republican Brain Hypothesis (RBH). RBH posits that conservative ideology is associated with dogmatic or unreflective reasoning styles that dispose conservative people to be dismissive of policy-relevant science on climate change and other issues. This is the basic thesis of Chris Mooney’s book The Republican Brain, which ably collects and synthesizes the social science data on which the claim rests.
As I’ve explained, I’m skeptical of RBH. Studies conducted by CCP link conflict over policy-relevant science to a form of motivated reasoning to which citizens of all cultural and ideological persuasions seem worrisomely vulnerable. The problem, I believe, isn’t that citizens with one or another set of values can’t or won’t use reason; it’s that the science communication environment, on which the well-being of all citizens depends, has become contaminated by antagonistic cultural meanings.
In the first installment in this series, I explained why I find the social science work that RBH rests on unpersuasive: vulnerability to culturally or ideologically motivated reasoning is not associated with any of the low-quality reasoning styles that various studies find to be correlated with conservatism. On the contrary, there is powerful evidence that higher-quality reasoning styles characterized by systematic or reflective thought can magnify the tendency to fit evidence to ideological or cultural predispositions when particular facts (the temperature of the earth; the effectiveness of gun control; the health effects of administering the HPV vaccine to school girls) become entangled in cultural or ideological rivalries.
In the second installment, I described an original study that adds support to this understanding. In that study, I found, first, that one reliable and valid measure of reflective and open-minded reasoning, the Cognitive Reflection Test (CRT), is not meaningfully correlated with ideology; second, that conservatives and liberals display ideologically motivated reasoning when considering evidence of whether CRT is a valid predictor of open-mindedness toward scientific evidence on climate change; and third, that this tendency to credit and dismiss evidence in an ideologically slanted way gets more intense as both liberals and conservatives become more disposed to use reflective or systematic reasoning, as measured by their CRT scores.
If this is what happens when people consider evidence on culturally contested issues like climate change (and this is not the only study that suggests it is), then they will end up polarized on policy-relevant science no matter what the correlation might be between their ideologies and the sorts of reasoning-style measures used in the studies collected in Republican Brain.
But there’s one last point to consider: the asymmetry thesis.
Mooney, who is scrupulously fair-minded in his collection and evaluation of the data, acknowledges that there is evidence that liberals do sometimes display motivated cognition. But he believes, on balance (and in part based on the studies correlating ideology with quality-of-reasoning measures), that a tendency to defensively resist ideologically threatening facts is greater among Republicans; that is, this psychological tendency is asymmetric, not symmetric, with respect to ideology.
The study I conducted furnishes some relevant data there, too.
The results I reported suggest that ideologically motivated reasoning occurred among the study subjects: how likely they were to accept that the CRT is valid depended on whether they were told the test had found “more” bias in people who share the subjects’ own ideology or in people who reject it. This ideological slant grew larger, moreover, as subjects’ CRT scores increased.
But the statistical test I used to measure this effect, a multivariate regression, essentially assumed the effect was uniform or linear with respect to subjects’ political leanings. If I had plotted the result of that statistical test on a graph with political leanings (measured by “z_conservrepub,” a scale that aggregates responses to a liberal-conservative ideology measure and a party-affiliation measure) on the x-axis and subjects’ likelihood of “agreeing” that CRT is valid on the y-axis, the results would have looked like this for subjects who score higher than average on CRT:
The tendency to “agree” or “disagree” depending on the ideological congeniality of doing so looks even for conservative Republicans and liberal Democrats. But it is constrained to do so by the statistical model.
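The constraint is easy to see in miniature. Here is a small sketch, using made-up coefficients rather than the study’s actual estimates, of why a logistic model with a purely linear term in political outlook can only produce mirror-image slants:

```python
import numpy as np

def sigmoid(x):
    # The inverse-logit link used by logistic regression
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical coefficients: +0.8 per unit of political outlook in one
# condition, -0.8 in the other (illustrative, not the study's estimates).
z = np.linspace(-2, 2, 9)            # political outlook (z_conservrepub)
p_skeptic = sigmoid(0.8 * z)         # "skeptics closed-minded" condition
p_nonskeptic = sigmoid(-0.8 * z)     # "nonskeptics closed-minded" condition

# The linear logit forces symmetry: a conservative's slant in one
# condition exactly mirrors a liberal's slant in the other.
assert np.allclose(p_skeptic, p_nonskeptic[::-1])
assert np.allclose(p_skeptic + p_nonskeptic, 1.0)
```

Whatever the data look like, a model specified this way cannot report that one side is more biased than the other; testing for that requires changing the model’s functional form.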
It is possible that the effect is in fact not even. This figure plots a hypothetical distribution of responses that is consistent with the asymmetry thesis.
Here people seem to adopt an ideologically opportunistic approach to assessing the validity of CRT only as they become more conservative and Republican; as they become more liberal and Democratic, in this hypothetical rendering, they are ideologically “neutral” in their assessments. If one applies a linear model (or, as I did, a logistic regression model that assumes a symmetric sigmoid function), an “asymmetry” of this sort could well escape notice!
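To see how it escapes notice, consider a toy illustration on simulated data (not the study’s): if the effect exists only on the conservative half of the scale, a straight-line fit smears it across the whole spectrum and understates it.

```python
import numpy as np

# Simulated pattern matching the asymmetry thesis: an ideological
# "bias" effect that exists only on the conservative (z > 0) side.
z = np.linspace(-1, 1, 201)        # political outlook
effect = np.maximum(z, 0.0)        # slope of 1 on the right, 0 on the left

# A linear fit averages the two halves: it reports a slope of about 0.5
# everywhere, implying some slant on the liberal side where there is
# none and understating the slant on the conservative side.
slope, intercept = np.polyfit(z, effect, 1)
print(round(slope, 2))  # 0.5
```

The linear model thus reports perfectly symmetric, moderate slants even though the data were built to be one-sided.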
But if one is curious whether an effect might not be linear, one can use a different statistical test. A polynomial regression fits a “curvilinear” model to the data. If the effect is not linear with respect to the explanatory variable (here, political outlook), that will show up in the model, the fit of which can be compared to the linear model.
So I fitted a polynomial model to the data from the experiment by adding an appropriate term (one that squared the effect of the interaction of CRT, ideology, and experimental condition). Lo and behold, that model fit better (see for yourself). The ideologically motivated reasoning that was generated by the experiment, and amplified by subjects’ disposition to engage in reflective information processing, really wasn’t linear!
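The mechanics can be sketched in a few lines. This toy example uses simulated data and ordinary least squares rather than the logistic model the study used, but it shows the basic move: add a squared term, refit, and compare residual error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated outcome with a genuine curvilinear component in political
# outlook, standing in for the squared interaction term in the study.
z = np.linspace(-2, 2, 400)
y = 0.5 * z + 0.4 * z**2 + rng.normal(0, 0.3, z.size)

# Fit a linear (degree 1) and a polynomial (degree 2) model, then
# compare residual sums of squares: smaller SSE means better fit.
sse = {}
for degree in (1, 2):
    coefs = np.polyfit(z, y, degree)
    sse[degree] = float(np.sum((y - np.polyval(coefs, z)) ** 2))

print(sse[2] < sse[1])  # True: the curvilinear model fits better
```

In the actual study the comparison was between logit models, but the logic is the same: nest the models and ask whether the added curvature term improves fit.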
But it wasn’t asymmetric in the sense contemplated by the ideological asymmetry thesis either! Where a “curvilinear” model fits best, one has to plot the effects of that model and see what it looks like in order to figure out what the nonlinear effect is and what it means. This figure (which illustrates the effect captured in the polynomial model by fitting a “smoothed,” local regression line to that model’s predicted values) does that:
I guess I’d say that subjects’ biased reasoning was “asymmetrical” with respect to the two experimental conditions: the intensity with which they credited or discredited ideologically congenial evidence was slightly greater in the condition that advised subjects the results of the (fictional) CRT studies had found “nonskeptics” on climate change to be closed-minded. But that was true, it seems, for those on both sides of the ideological spectrum.
In any event, the picture of what the “curvilinear” effect looks like is not even close to the picture the “asymmetry thesis” predicts. Both liberals and conservatives are engaged in motivated reasoning, and the effect is not meaningfully different for either.
Now, why go through all this? Well, obviously, because it’s fun! Heck, if you have actually read this post and gotten this far, you must agree.
But there’s also a take-away: One can’t tell whether a motivated reasoning effect is truly “asymmetric” unless one applies the correct statistical test.
It’s pretty much inevitable that an effect observed in any sort of social science experiment won’t be “linear.” Even in the (unlikely) event that the phenomenon one is measuring is in fact genuinely linear, data always have noise, and effects therefore always have lumps with reference to the experimental and other influences that produce them.
If the hypothesis one is testing suggests a linear effect is likely to be right or close to it, one starts with a linear test and sees if the results hold up.
If one has the hypothesis that the effect is not linear, or suspects after looking at the raw data that it might not be and is interested to find out, then one must apply an appropriate nonlinear test. If that test doesn’t corroborate that there is in fact a curvilinear effect, and that the curvilinear model fits better than the linear one, then one doesn’t have sufficient evidence to conclude the effect isn’t linear.
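One common way to run that comparison for nested models is a partial F-test. The sketch below uses simulated data and a hard-coded critical value rather than the logit-based comparison in the study, but it illustrates the question being asked:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data with a genuine quadratic effect in political outlook
z = np.linspace(-2, 2, 300)
y = 0.5 * z + 0.4 * z**2 + rng.normal(0, 0.3, z.size)

def sse(degree):
    # Residual sum of squares for a polynomial fit of the given degree
    coefs = np.polyfit(z, y, degree)
    return float(np.sum((y - np.polyval(coefs, z)) ** 2))

# Partial F-test for nested models: does adding the squared term
# reduce residual error by more than chance would predict?
sse_linear, sse_quad = sse(1), sse(2)
df_extra, df_full = 1, z.size - 3   # one added term; 3 params in full model
f_stat = ((sse_linear - sse_quad) / df_extra) / (sse_quad / df_full)

# 3.87 is roughly the 5% critical value of F(1, 297): an F above it
# is evidence that the curvilinear model genuinely fits better.
print(f_stat > 3.87)
```

If the F statistic falls below the critical value, the extra curvature term hasn’t earned its keep, and one sticks with the linear model.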
Sometimes when empirical researchers examine ideologically motivated reasoning the raw or summary data might make it look like the effect is “bigger” for one ideological group than the other. But that’s not enough to conclude that the effect fits the asymmetry thesis. Any researcher who wants to test the asymmetry hypothesis still has to do the right statistical test before he or she can conclude that the data really support it.
I’m not aware of anyone who has conducted a study of ideologically motivated reasoning who has reported finding a curvilinear effect that fits the logic of the asymmetry thesis.
If you know of such a study, please tell me!
Post 1 in this “series”
Post 2 in it
p.s.
I’ve also plotted the results in the same fashion I did last time, essentially predicting the likelihood that a “high CRT” (CRT = 1.6) “conservative Republican” (+1 SD on z_conservrepub) and a “high CRT” “liberal Democrat” (-1 SD) would view the CRT test as valid in the three experimental conditions.
The estimates in the top graph take the curvilinear effect into account, so they can be understood to be furnishing a reliable picture of the relative magnitude of the motivated reasoning effects for people with those respective characteristics. Looks pretty uniform, I’d say.
Otherwise, while the effects might be just a tad more dramatic, they clearly aren’t materially different from the ones brought into view with the ordinary logit model. No real point, I’d say, in treating the polynomial model as “better” in any interesting sense; it was just interesting to find out whether the polynomial model would both fit better and alter the interpretation suggested by the nonpolynomial model.
Here’s another & likely more realistic picture of what one might expect to see if the “asymmetry thesis” (motivated reasoning greater on right than left) is true:
Here liberal Democrats are biased, crediting the validity of CRT when it is represented as showing that climate-change believers are more open-minded and dismissing its validity when it is represented as showing that climate-change skeptics are, but the degree of (mirror-image) bias is much greater for conservative Republicans. Again, though, this is not what one sees when one fits a polynomial model to the data.
It’s also not what one sees when one eyeballs the raw data (always an essential thing to do before one starts fitting statistical models to the data). It’s hard to see much looking at a scatter plot because of the number of observations and the difficulty of segregating them by ideology, experimental condition, and CRT. But it’s possible to see the raw data pattern pretty well if one fits a lowess (locally weighted regression) line to them:
I fit such lines to “high” CRT subjects (CRT > 0) for each condition. The shape of the green & black curves is (or at least mainly is) due to the sigmoidal distribution one would expect when responses to a dichotomous variable (“agree” or “disagree” that CRT is a valid test of open-mindedness) are plotted against a meaningful predictor. The key question, for the asymmetry thesis, is whether the mirror-image green & black S curves seem to be skewed, with bigger curves to the right of the mean on z_conservrepub (the aggregate Likert scale that combines subjects’ ideology & political-party affiliation scores). I don’t see it; do you? The output from the polynomial regression (graphically displayed in the post) confirms that the “best fitting” model doesn’t have the sort of skew one would expect if one were betting on the asymmetry thesis.
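For readers curious what a lowess fit actually does, here is a minimal sketch of the idea: at each point, fit a straight line to that point’s nearest neighbors, weighted by a tricube kernel. (Real implementations, such as the one used for the figures here, add robustness iterations and other refinements.)

```python
import numpy as np

def lowess(x, y, frac=0.5):
    """Minimal lowess-style smoother: a tricube-weighted local linear
    fit over each point's nearest neighbors (illustrative only)."""
    n = len(x)
    k = max(2, int(frac * n))              # neighbors used per local fit
    fitted = np.empty(n)
    for i in range(n):
        dist = np.abs(x - x[i])
        idx = np.argsort(dist)[:k]         # k nearest neighbors
        h = dist[idx].max() or 1.0         # local bandwidth (guard vs 0)
        w = (1 - (dist[idx] / h) ** 3) ** 3    # tricube weights
        # np.polyfit's w multiplies residuals, so pass sqrt of weights
        b, a = np.polyfit(x[idx], y[idx], 1, w=np.sqrt(w))
        fitted[i] = a + b * x[i]
    return fitted

# Sanity check: on data that already lie on a straight line,
# the smoother recovers the line.
x = np.linspace(-2, 2, 50)
y = 0.5 * x + 1.0
assert np.allclose(lowess(x, y), y, atol=1e-6)
```

Applied to agree/disagree responses plotted against z_conservrepub, separately by condition, this kind of smoother is what traces the S-shaped curves described above.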