Modeling the incoherence of coherence-based reasoning: report from Law & Cognition 2016

I’ve covered this ground before (in a three-part series last year), but this post supplies a compact recap of how coherence-based reasoning (CBR), the dynamic featured in Session 5 of the Law & Cognition 2016 seminar, subverts truth-convergent information processing.

The degree of subversion is arguably more extreme, in fact, than that associated with any of the decision dynamics we’ve examined so far.

Grounded in aversion to residual uncertainty, CBR involves a form of rolling, recursive confirmation bias.

Where decisionmaking evinces CBR, the factfinder engages in reasonably unbiased processing of the evidence early in the decisionmaking process. But the more confident she becomes in one outcome, the more she thereafter adjusts the weight—or in Bayesian terms the likelihood ratio—associated with subsequent pieces of independent evidence to conform her assessment of them to that outcome.

As her confidence grows, moreover, she revisits pieces of evidence that earlier appeared either to contravene that outcome or to support it only weakly, and readjusts the weight afforded to them as well so as to bring them into line with her now-favored view.

By virtue of these feedback effects, decisions informed by CBR are marked by a degree of supreme confidence that belies the potential complexity and equivocality of the trial proof.

Such decisions are also characterized, at least potentially, by an arbitrary sensitivity to the order in which pieces of evidence are considered. Where both sides in a case have at least some strong evidence, which side’s strong evidence is encountered (or cognitively assimilated) “first” can determine the direction of the feedback dynamics that thereafter determine whether the other side’s proof is given the weight it’s due.

It should go without saying that this form of information processing is not truth convergent.

As reflected in the simple Bayesian model we have been using in the course, truth-convergent reasoning demands not only that the decisionmaker update her factual assessments in proportion to the weight—or likelihood ratio—associated with a piece of evidence; it requires that she determine the likelihood ratio on the basis of valid, truth-convergent criteria.
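In the odds form of Bayes’s theorem that the course model uses, the requirement can be stated compactly (this is just the standard formulation, nothing peculiar to the seminar materials):

```latex
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
```

Truth-convergent updating demands that the last term reflect only how much more (or less) probable the evidence is if the hypothesis is true than if it is false, not how confident the factfinder already happens to be.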

That isn’t happening under CBR. CBR is driven by an aversion to complexity and equivocality that unconsciously induces the decisionmaker to credit and discredit evidence in patterns that result in a state of supreme overconfidence in an outcome that might well be incorrect. The preference for coherence across diverse, independent pieces of evidence, then, is an extrinsic motivation that invests the likelihood ratio with qualities unrelated to the truth.

Just how inimical this process is to truth seeking can be usefully illustrated with a simple statistical simulation.

The key to the simulation is the “CBR function,” which inflates the likelihood ratio assigned to the evidence by a factor tied to the factfinder’s existing assessment of the probability of a particular factual proposition.  This element of the simulation models the tendency of the decisionmaker to overvalue evidence in the direction and in proportion to her confidence in a particular outcome.

In the simulation, the CBR factor is set so that a decisionmaker overweights the likelihood ratio by 1 “deciban” for every one-unit increment in the odds in favor of a particular outcome (“1:1” to “2:1” to “3:1” etc.). Accordingly, she overvalues the evidence by a factor of 2 as the odds shift from even money (1:1) to 10:1, and by an amount proportionate to that as the odds grow progressively more lopsided. I’ve discussed previously why I selected this formula, which is a tribute to Alan Turing & Jack Good and the pioneering work they did in Bayesian decision theory.
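For concreteness, here is a minimal Python sketch of one way such a rule could be implemented. The function name, the symmetric treatment of leanings toward the defense, the decision to treat the odds increment as continuous, and the cap on the overweighting are all my own assumptions for illustration; this is not a reproduction of the simulation’s actual code.

```python
def cbr_adjusted_lr(lr, current_odds, cap=10.0):
    """One reading of the CBR factor: overweight the likelihood ratio by
    1 deciban (a factor of 10 ** 0.1) for every unit the factfinder's
    current odds sit beyond even money, in whichever direction she leans.
    The 10-deciban cap is purely a numerical convenience for this sketch.

    lr           -- unbiased likelihood ratio of the evidence
                    (> 1 favors the prosecution, < 1 favors the defense)
    current_odds -- current odds in favor of the prosecution's case
                    (e.g., 3.0 means 3:1)
    """
    if current_odds >= 1:                         # leaning toward guilt
        decibans = min(current_odds - 1, cap)
    else:                                         # leaning toward innocence
        decibans = -min(1 / current_odds - 1, cap)
    return lr * 10 ** (decibans / 10)

# At even money there is no distortion; as the odds lengthen, the same
# piece of evidence is overvalued in the direction of the leaning.
print(cbr_adjusted_lr(2.0, 1.0))   # 2.0
print(cbr_adjusted_lr(2.0, 5.0))   # ~5.02  (2.0 * 10 ** 0.4)
```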

This table illustrates the distorting impact of the CBR factor. It shows how a case consisting of eight “pieces” of evidence—four pro-prosecution and four pro-defense—that ought to result in a “tie” (odds of 1:1 in favor of a prosecutor’s charge) can generate an extremely confident judgment in favor of either party, depending on the order of the trial proof.
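The same point can be made in code. The sketch below pushes one hypothetical set of eight likelihood ratios, four favoring each side and chosen so that unbiased updating would end exactly at 1:1, through the CBR rule sketched above in two different orders. The particular values are invented for illustration; they are not the figures behind the table.

```python
def cbr_adjusted_lr(lr, current_odds, cap=10.0):
    """Same assumed CBR rule as above: 1 deciban of overweighting per unit
    of odds beyond even money, in the direction of the current leaning."""
    if current_odds >= 1:
        decibans = min(current_odds - 1, cap)
    else:
        decibans = -min(1 / current_odds - 1, cap)
    return lr * 10 ** (decibans / 10)

def final_odds(lrs):
    """Update 1:1 prior odds through the evidence in the given order,
    applying the CBR distortion at every step."""
    odds = 1.0
    for lr in lrs:
        odds *= cbr_adjusted_lr(lr, odds)
    return odds

prosecution = [4.0, 3.0, 2.0, 5.0]            # hypothetical pro-prosecution LRs
defense = [1 / lr for lr in prosecution]      # mirror-image pro-defense LRs

# Unbiased updating ends at 1:1 either way; under CBR the order decides.
print(final_odds(prosecution + defense))   # prosecution's proof first: odds >> 1
print(final_odds(defense + prosecution))   # defense's proof first: odds << 1
```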

In the simulation, we can generate 100 cases, each consisting of 4 pieces of “prosecution” evidence—pieces of evidence with likelihood ratios drawn randomly from a uniform distribution spanning 1.05 to 20—and 4 pieces of “defense” evidence—ones with likelihood ratios drawn randomly from the reciprocal values (0.95 to 0.05) of that same uniform distribution.
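A minimal way to generate one such run of cases in Python (reading the defense values as reciprocals of draws from the same uniform distribution; the function name and the use of numpy’s default generator are my own choices):

```python
import numpy as np

rng = np.random.default_rng()

def make_case(n_per_side=4, lo=1.05, hi=20.0):
    """One simulated case: n pro-prosecution likelihood ratios drawn
    uniformly from [lo, hi] and n pro-defense likelihood ratios that are
    reciprocals of draws from the same distribution."""
    prosecution = rng.uniform(lo, hi, n_per_side)
    defense = 1.0 / rng.uniform(lo, hi, n_per_side)
    return np.concatenate([prosecution, defense])

cases = [make_case() for _ in range(100)]   # one "run" of 100 cases
```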

The histograms illustrate the nature of the “confidence skew” resulting from the impact of CBR in those 100 cases. As expected, there are many fewer “close cases” when decisionmaking reflects CBR than there would be if decisionmaking reflected unbiased Bayesian updating.

The skew exacts a toll on outcome accuracy. The toll, moreover, is asymmetric: if we assume that the prosecution has to establish her case to a probability of 0.95 to satisfy the “beyond a reasonable doubt” standard, many more erroneously decided cases will involve false convictions than false acquittals, since only those cases in which equivocation is incorrectly resolved in favor of exaggerated confidence in guilt will result in incorrect decisions. (Obviously, if these were civil cases tried under a preponderance of the evidence standard, the error rates for false findings of liability and false findings of no liability would be symmetric.)

This is one “run” of 100 cases. Let’s put together a full-blown Monte Carlo simulation (a tribute to the Americans working on the Manhattan Project; after all, why should the Bletchley Park codebreakers Turing & Good garner all our admiration) & simulate 1,000 sets of 100 cases so that we can get a more precise sense of the distribution of correctly and incorrectly decided cases given the assumptions built into our coherence-based-reasoning model.
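For readers who want to play along, here is a self-contained toy version of that exercise, pulling the earlier sketches together. Everything beyond what the post describes is an assumption on my part: the particular CBR rule and its deciban cap, the random shuffling of each case’s eight items, and the use of unbiased Bayesian updating as the benchmark for what a case “ought” to yield. The 0.95 conviction threshold comes from the paragraph above.

```python
import numpy as np

rng = np.random.default_rng()

def cbr_adjusted_lr(lr, current_odds, cap=10.0):
    """Assumed CBR rule: 1 deciban of overweighting per unit of odds beyond
    even money, in the direction of the current leaning (capped for sanity)."""
    if current_odds >= 1:
        decibans = min(current_odds - 1, cap)
    else:
        decibans = -min(1 / current_odds - 1, cap)
    return lr * 10 ** (decibans / 10)

def posterior_probability(lrs, biased):
    """Update 1:1 prior odds through the evidence in order; return P(guilt)."""
    odds = 1.0
    for lr in lrs:
        odds *= cbr_adjusted_lr(lr, odds) if biased else lr
    return odds / (1.0 + odds)

def make_case(n_per_side=4, lo=1.05, hi=20.0):
    """Four pro-prosecution LRs, four reciprocal pro-defense LRs, random order."""
    lrs = np.concatenate([rng.uniform(lo, hi, n_per_side),
                          1.0 / rng.uniform(lo, hi, n_per_side)])
    rng.shuffle(lrs)
    return lrs

THRESHOLD = 0.95                       # "beyond a reasonable doubt"
false_convictions, false_acquittals = [], []

for _ in range(1000):                  # 1,000 sets ...
    fc = fa = 0
    for _ in range(100):               # ... of 100 cases each
        lrs = make_case()
        should_convict = posterior_probability(lrs, biased=False) >= THRESHOLD
        does_convict = posterior_probability(lrs, biased=True) >= THRESHOLD
        fc += int(does_convict and not should_convict)
        fa += int(should_convict and not does_convict)
    false_convictions.append(fc)
    false_acquittals.append(fa)

print("mean false convictions per 100 cases:", np.mean(false_convictions))
print("mean false acquittals per 100 cases: ", np.mean(false_acquittals))
```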

If we do that, we see this:

Obviously, all these numbers are ginned up for purposes of illustration.

We can’t know (or can’t without a lot of guesswork) what the parameters should be in a model like this.

But even without doing that, we can know that we ought to have grave doubts about the accuracy, and hence the legitimacy, of a legal system that relies on decisionmakers subject to this decisionmaking dynamic.

Are jurors subject to this dynamic?  That’s a question that goes to the external validity of the studies we read for this session.

But assuming that they are, would professional decisionmakers likely do better? That’s a question very worthy of additional study.
