What is the expert consensus on whether the death penalty deters murders—or instead increases them through a cultural “brutalization effect”?
What is the expert consensus on whether permitting citizens to carry concealed handguns in public increases homicide—or instead decreases it by discouraging violent predation?
According to the National Research Council, the research arm of the National Academy of Sciences, the expert consensus answer to these two questions is the same:
It’s just not possible to say, one way or the other.
Last April (way back in 2012), an expert NRC panel charged with determining whether the “available evidence provide[s] a reasonable basis for drawing conclusions” about the impact of the death penalty
concluded that research to date on the effect of capital punishment on homicide is not informative about whether capital punishment decreases, increases, or has no effect on homicide rates. Therefore, the committee recommends that these studies not be used to inform deliberations requiring judgments about the effect of the death penalty on homicide. Consequently, claims that research demonstrates that capital punishment decreases or increases the homicide rate by a specified amount or has no effect on the homicide rate should not influence policy judgments.
Way way back in 2004 (surely new studies have come out since, right?), the expert panel assigned to assess the “strengths and limitations of the existing research and data on gun violence,”
found no credible evidence that the passage of right-to-carry laws decreases or increases violent crime, and there is almost no empirical evidence that the more than 80 prevention programs focused on gun-related violence have had any effect on children’s behavior, knowledge, attitudes, or beliefs about firearms. The committee found that the data available on these questions are too weak to support unambiguous conclusions or strong policy statements.
The expert panels’ determinations, moreover, were based not primarily on the volume of data available on these questions but rather on what both panels saw as limitations inherent in the methods that criminologists have relied on in analyzing this evidence.
In both areas, this literature consists of multivariate regression models. As applied in this context, multivariate regression seeks to extract the causal impact of criminal laws by correlating differences in law with differences in crime rates “controlling for” the myriad other influences that could conceivably be contributing to variation in homicide across different places or within a single place over time.
Inevitably, such analyses involve judgment calls. They are models that, like many statistical models, must make use of imprecise indicators of unobserved and unobservable influences, and the relationships among those influences must be specified on the basis of a theory that is itself independent of any evidence in the model.
The problem, for both the death penalty and concealed-carry law regression studies, is that results come out differently depending on how one constructs the models.
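To make the point concrete, here is a minimal, purely hypothetical sketch of my own (it comes from neither NRC report nor from any study in these literatures). It simulates state-by-year homicide data in which a “law” indicator has, by construction, no effect at all, and then estimates that effect under two specifications a reasonable analyst might defend. Everything here (the variable names, the data-generating assumptions, the choice of controls) is invented solely for illustration.

```python
# Hypothetical illustration only: simulated state-year panel data in which the
# "law" has exactly zero true effect on homicide, analyzed two different ways.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states, years = 30, 25
rows = []
for s in range(states):
    baseline = rng.normal(5.0, 1.5)       # unobserved state-specific level
    trend = rng.normal(0.0, 0.05)         # unobserved state-specific trend
    adopt_year = 5 if trend > 0 else 15   # adoption timing happens to track the trend
    for t in range(years):
        law = int(t >= adopt_year)        # the law itself contributes nothing below
        unemployment = rng.normal(6.0, 1.0)
        homicide = baseline + trend * t + 0.2 * unemployment + rng.normal(0.0, 0.5)
        rows.append(dict(state=s, year=t, law=law,
                         unemployment=unemployment, homicide=homicide))
df = pd.DataFrame(rows)

# Specification A: state and year fixed effects only.
spec_a = smf.ols("homicide ~ law + C(state) + C(year)", data=df).fit()

# Specification B: adds a covariate and state-specific linear trends.
spec_b = smf.ols("homicide ~ law + unemployment + C(state) + C(state):year",
                 data=df).fit()

print("Spec A estimate of the 'law' effect:", round(spec_a.params["law"], 3))
print("Spec B estimate of the 'law' effect:", round(spec_b.params["law"], 3))
# The true effect is zero by construction, yet the two estimates will generally
# differ (and can differ in sign): the specification, not the data, is doing
# the work.
```

Re-run it with a different seed or a different menu of controls and the “law” estimate will wander, sometimes changing sign, even though the true effect never budges from zero. That, in miniature, is what both panels mean when they say the specification, rather than the evidence, can end up supplying the answer.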
“The specification of the death penalty variables in the panel models varies widely across the research and has been the focus of much debate,” the NRC capital punishment panel observed. “The research has demonstrated that different death penalty sanction variables, and different specifications of these variables, lead to very different deterrence estimates—negative and positive, large and small, both statistically significant and not statistically significant.”
That’s exactly the same problem that the panel charged with investigating concealed-carry laws focused on:
The committee concludes that it is not possible to reach any scientifically supported conclusion because of (a) the sensitivity of the empirical results to seemingly minor changes in model specification, (b) a lack of robustness of the results to the inclusion of more recent years of data (during which there were many more law changes than in the earlier period), and (c) the statistical imprecision of the results.
This problem, both panels concluded, is intrinsic to the mode of analysis being employed. It can’t be cured with more data; it can only be made worse as one multiplies the number of choices that can be made about what to put in and what to leave out of the necessarily complex models that must be constructed to account for the interplay of all the potential influences involved.
“There is no empirical basis for choosing among these [model] specifications,” the NRC death penalty panel wrote.
[T]here has been heated debate among researchers about them…. This debate, however, is not based on clear and principled arguments as to why the probability timing that is used corresponds to the objective probability of execution, or, even more importantly, to criminal perceptions of that probability. Instead, researchers have constructed ad hoc measures of criminal perceptions. . . .
Even if the research and data collection initiatives discussed in this chapter are ultimately successful, research in both literatures share a common characteristic of invoking strong, often unverifiable, assumptions in order to provide point estimates of the effect of capital punishment on homicides.
The NRC gun panel said the same thing:
It is also the committee’s view that additional analysis along the lines of the current literature is unlikely to yield results that will persuasively demonstrate a causal link between right-to-carry laws and crime rates (unless substantial numbers of states were to adopt or repeal right-to-carry laws), because of the sensitivity of the results to model specification. Furthermore, the usefulness of future crime data for studying the effects of right-to-carry laws will decrease as the time elapsed since enactment of the laws increases. If further headway is to be made on this question, new analytical approaches and data sets will need to be used.
So to be sure, the NRC reached its “no credible evidence” conclusion on right-to-carry laws way back in 2004. But its conclusion was based on “the complex methodological problems inherent in” regression analysis, the same methodological problems that were the basis of the NRC’s 2012 conclusion that death penalty studies are “not informative” and “should not influence policy judgments.”
Nothing’s changed on that score. The experts at the National Academy of Sciences either are right or they are wrong to treat multivariate regression analysis as an invalid basis for inference about the effects of criminal law.
The reasoning here is all pretty basic, pretty simple, something that any educated, motivated person could figure out by sitting down with the reports for a few hours (& who wouldn’t want to do that?!).
Yet all of this has clearly evaded the understanding of many extremely intelligent, extremely influential participants in our national political conversation.
I’ll pick on the New York Times, not because it is worse than anyone else but because it’s the newspaper I happen to read every day.
Just the day before yesterday, it said this in an editorial about the NRC’s capital punishment report:
A distinguished committee of scholars convened by the National Research Council found that there is no useful evidence to determine if the death penalty deters serious crimes. Many first-rate scholars have tried to prove the theory of deterrence, but that research “is not informative about whether capital punishment increases, decreases, or has no effect on homicide rates,” the committee said.
Okay, that’s right.
But here is what the Times’ editorial page editor said the week before last about concealed carry laws:
Of the many specious arguments against gun control, perhaps the most ridiculous is that what we really need is the opposite: more guns, in the hands of more people, in more places. If people were packing heat in the movies, at workplaces, in shopping malls and in schools, they could just pop up and shoot the assailant. . . . I see it differently: About the only thing more terrifying than a lone gunman firing into a classroom or a crowded movie theater is a half a dozen more gunmen leaping around firing their pistols at the killer, which is to say really at each other and every bystander. It’s a police officer’s nightmare. . . . While other advanced countries have imposed gun control laws, America has conducted a natural experiment in what happens when a society has as many guns as people. The results are in, and they’re not counterintuitive.
Wait a sec…. What about the NRC report? Didn’t it tell us that, far from the “results” being “in,” “it is not possible to reach any scientifically supported conclusion” on whether concealed carry laws increase or decrease crime?
I know the New York Times is aware of the NRC’s expert consensus report on gun violence. It referred to the report in an editorial just a couple days earlier.
In that one, it called on Congress to enact a national law that would require the 35 states that now have permissive “shall issue” laws—ones that mandate officials approve the application of any person who doesn’t have a criminal record or history of mental illness—to “set higher standards for granting permits for concealed weapons.” “Among the arguments advanced for these irresponsible statutes,” it observed,
is the claim that ‘shall issue’ laws have played a major role in reducing violent crime. But the National Research Council has thoroughly discredited this argument for analytical errors. In fact, the legal scholar John Donohue III and others have found that from 1977 to 2006, ‘shall issue’ laws increased aggravated assaults by “roughly 3 to 5 percent each year.”
Sigh.
Yes, the NRC concluded that there was “no credible evidence” that concealed carry laws reduce crime.
But as I pointed out, what it said was that it “found no credible evidence that the passage of right-to-carry laws decreases or increases violent crime.” So why shouldn’t we view the Report as also “thoroughly discrediting” the Times editorial’s conclusion that those laws “seem almost designed to encourage violence”?
And, yes, the NRC can be said (allowing for loose translation of more precise and measured language) to have found “analytical errors” in the studies that purported to show shall issue laws reduce crime.
But those “analytical errors,” as I’ve pointed out, involve the use of multivariate regression analysis to try to figure out the impact of concealed carry laws. That’s precisely the sort of analysis used in the Donohue study that the Times identifies as finding shall issue laws increased violent crime.
The “analytical errors” that the Times refers to are inherent in the use of multivariate regression analysis to try to understand the impact of criminal laws on homicide rates.
That’s why the NRC’s 2012 death penalty report said that findings based on this methodology are “not informative” and “should not influence policy judgments.”
The Times, as I said, got that point. But only when it was being made about studies that show the death penalty deters murder, and not when it was being made about studies that find concealed carry laws increase crime….
This post is not about concealed carry laws (my state has one; I wish it didn’t) or the death penalty (I think it is awful).
It is about the obligation of opinion leaders not to degrade the value of scientific evidence as a form of currency in our public deliberations.
In an experimental study, the CCP found that citizens of diverse cultural outlooks all believe that “scientific consensus” is consistent with the position that predominates within their group on climate change, concealed carry laws, and nuclear power. Members of all groups were correct – 33% of the time.
How do ordinary people (ones like you & me included) become so thoroughly confused about these things?
The answer, in part, is that they are putting their trust in authoritative sources of information—opinion leaders—who furnish them with a distorted, misleading picture of what the best available scientific evidence really is.
The Times, very appropriately, has published articles that attack the NRA for seeking to block federal funding of the scientific study of firearms and homicide. Let’s not mince words: obstructing scientific investigation aimed at promoting society’s collective well-being is a crime in the Liberal Republic of Science.
But so is presenting an opportunistically distorted picture of what the state of that evidence really is.
The harm that such behavior causes, moreover, isn’t limited to the confusion it creates in people who (like me!) rely on opinion leaders to tell them what scientists really believe.
It includes as well the cynicism it breeds about whether claims about scientific consensus mean anything at all. One day someone is bashing his or her opponents over the head for disputing or distorting “scientific consensus”—and the next day that same someone can be shown (incontrovertibly and easily) to be ignoring or distorting it too.
By the way, John Donohue is a great scholar, one of the greatest empirical analysts of public policy ever.
Both of the NRC expert consensus reports that I’ve cited conclude that studies he and other econometricians have done are “not informative” for policy because of what those reports view as insuperable methodological problems with multivariate analysis as a tool for understanding the impact of law on crime.
Donohue disagrees, and he continues to write papers reanalyzing the data that the NRC (in its firearms study) said are inconclusive because of the “complex methodological problems” inherent in the statistical techniques that Donohue used, and continues to use, to analyze them.
But that’s okay.
You know what one calls a scientist who disputes “scientific consensus”?
A scientist.
But that’s for another day.