Maybe this (like the honeybadger) will turn out to be one of those discoveries on my part that everyone else already knows about, thereby revealing my disturbing remoteness from the zeitgeist, but the underscored sentence struck me as sooooooo hilarious I thought I should take the risk and share it, just in case it really is a hidden gem:
Actually, the paper (Good 1994) is not nearly so esoteric as it looks. Good was a brilliant writer, whose goal was to help curious people understand complicated things, as opposed to the sort of terrible writer whose goal is to be understood as brilliant by people he knows won’t be able to comprehend what he is saying (which usually is nothing very interesting).
I came across this paper while looking for accessible accounts of Turing’s use of “bans” and “decibans,” logarithmic precursors of the Bayes factor, as a useful heuristic for making the concept of “weight of the evidence” tractable (in my case for a paper on the conceit that rules of evidence can be used to correct for foreseeable cognitive biases on the part of factfinders in legal proceedings).
A “ban,” essentially, is a likelihood ratio of 10. That is, we would say a piece of evidence has a weight of “1 ban” when it makes some hypothesis 10 times more probable, necessarily relative to some alternative hypothesis, than we had reason to regard it without that evidence.
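In symbols (my notation, not anything official in Turing or Good), the weight that evidence E lends hypothesis H over an alternative H̄ is the base-10 logarithm of the likelihood ratio:

$$ W(H : E) \;=\; \log_{10}\frac{P(E \mid H)}{P(E \mid \bar{H})} \quad \text{(in bans)} $$

So evidence with a likelihood ratio of 10 weighs exactly 1 ban.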
Turing, working on decryption at Bletchley Park in WW II, selected the ban as a unit to guide the mechanized search for solutions to codes generated by the German “Enigma” machine. Actually, Turing advocated using “decibans,” which are 1/10 of a ban, to assess the probative value of potential matches between sequences of code and plain text that poured out of the “bombe” decoders, electromechanical proto-computers that rifled through the zillions of combinations formed by the interacting Enigma rotors, the settings of which determined the encryption “key” for Enigma-encrypted messages.
Turing judged a deciban (again, 1/10 of a “ban,” corresponding to a likelihood ratio of about 1.26:1, or roughly 5:4) to be pretty much the smallest difference in relative likelihood that a human being was likely to be able to perceive (Good 1979).
That’s an empirical claim about cognition, of course. What evidence did Turing have for it? None, except the vast amount of experience that he and his fellow code-breakers were accumulating as they dedicated themselves to the productive deciphering of Enigma messages. That certainly counts for something, but for how much? See the value of having some system of “evidentiary weight” units here?
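For the arithmetically inclined, here is a minimal sketch of what the deciban bookkeeping looks like in modern terms (the function name and the example likelihood ratios are mine, purely illustrative):

```python
import math

def decibans(likelihood_ratio):
    """Weight of evidence in decibans: 10 times the base-10 log of the likelihood ratio."""
    return 10 * math.log10(likelihood_ratio)

print(decibans(10))    # 10.0, i.e., one full ban
print(decibans(1.25))  # ~0.97, the "about 5:4" ratio, just under one deciban

# Because logarithms turn multiplication into addition, the weights of
# independent clues can simply be summed, the bookkeeping feature that
# made decibans practical to tally by hand.
clues = [1.25, 2.0, 0.8]  # likelihood ratios from three hypothetical observations
print(sum(decibans(lr) for lr in clues))  # ~3.01 decibans total
```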
Good, a 24-year-old, freshly minted Cambridge mathematician, was part of Turing’s team.
After the war, he wrote prolifically on probability theory, and Bayesian statistics in particular, for decades. He had lots of informative things to say about the concept of “evidentiary weight” (Good 1985). He died in 2009.
Turns out he was really funny too.
Or at least I’d say that this sentence is a good 10 bans’ worth of evidence that he was.
References
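Good, I. (1979). Studies in the history of probability and statistics. XXXVII. A. M. Turing’s statistical work in World War II. Biometrika, 66(2), 393-396.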
Good, I. (1985). Weight of evidence: A brief survey. In J. M. Bernardo, M. H. DeGroot, D. V. Lindley & A. F. M. Smith (Eds.), Bayesian statistics 2: Proceedings of the Second Valencia International Meeting (pp. 249-270). Amsterdam: North-Holland.
Good, I. (1994). Causal tendency, necessitivity and sufficientivity: An updated review. In P. Humphreys (Ed.), Patrick Suppes: Scientific philosopher (pp. 293-315). Dordrecht: Springer.