Larry Laudan & Ronald J. Allen, DEADLY DILEMMAS II: BAIL AND CRIME, 85 Chi.-Kent L. Rev. 23 (2010); Ronald J. Allen & Larry Laudan, DEADLY DILEMMAS, 41 Tex. Tech L. Rev. 65 (2008).

Over the last couple of years, I’ve developed a bit of an SSRN-induced brain crush on the epistemologist Larry Laudan, whom I’ve never met, but whose recent work ought to press the criminal justice commentariat to rethink many common assumptions about the trade-offs the system makes between Type I errors (false convictions) and Type II errors (wrongful acquittals or non-convictions of factually guilty persons).

In particular, the work Laudan has been doing with Ronald J. Allen (Northwestern) mines the same rich vein of material earlier espied by UVA’s Darryl Brown in his important work on cost-benefit analysis in criminal law, a field that also encompasses the controversial Sunstein-Vermeule death penalty paper from a few years back. For a very short introduction to Laudan’s intellectual agenda, see the piece he posted entitled “The elementary epistemic arithmetic of criminal justice.” But in this JOTWELL review, I direct your attention to two pieces Laudan wrote with Allen. The first, “Deadly Dilemmas,” is a sharp, short essay written for a symposium at Texas Tech. The more recent paper, “Deadly Dilemmas II: Bail and Crime,” extends their framework of examining procedural rules in light of their real-world costs and tradeoffs to the realm of pretrial release. In advocating a more restrictive approach to pretrial release, this second paper also offers a practical and “modest” proposal for reforming policies across the country in order to bring down the moral costs of so many possibly preventable serious crimes.

The core of their work is to show us with greater precision what we are doing when we consider risk-risk tradeoffs in the rules of evidence and criminal procedure. Many people think these tradeoffs should be struck so as to minimize Type I errors of mistaken punishment. For example, we all teach our students in criminal law about Blackstone’s maxim, i.e., that it is purportedly better that N guilty persons (e.g., 10) go free than that an innocent person suffer wrongful punishment. But as Allen and Laudan carefully show, Blackstone’s maxim alone is radically insufficient as a guide for policy design. While Blackstone’s maxim is consistent with accurate results in any set of 100 cases, it is also consistent with mistakes in 99 out of 100 cases. What? Yes: “imagine that there are 9 wrongful convictions out of every 100 cases that go to trial and that to ensure the number goes no higher the system is structured to generate the requisite 90 wrongful acquittals. In that case, a perfectly Blackstone compliant system generates mistakes in 99 out of 100 cases that go to trial.”
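To make the arithmetic in that quoted hypothetical concrete, here is a minimal sketch of my own; the numbers track the Allen-Laudan example, not any real data:

# Illustration only: the Allen-Laudan hypothetical of a "perfectly
# Blackstone compliant" docket of 100 trials (my sketch, not their data).
cases = 100
wrongful_convictions = 9
# Blackstone's 10:1 maxim is honored so long as wrongful acquittals
# outnumber wrongful convictions by at least ten to one.
wrongful_acquittals = 10 * wrongful_convictions      # 90
total_errors = wrongful_convictions + wrongful_acquittals
print(f"{total_errors} erroneous verdicts out of {cases} trials")  # 99 out of 100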

Now it’s true that few people look to Blackstone as the principal architect of our current institutions, but the costs of our commitment to Type I error reduction are not well recognized. In this respect, Laudan and Allen try to provide some sense of what those costs are to average citizens, especially in the context of serious crimes (those involving homicide, rape, aggravated battery, and armed robbery). Based on some earlier figures, they write that “the chance of being wrongfully convicted of a serious crime over one’s lifetime hovers at most around 0.25% whereas the chance of being the victim of a serious crime over one’s lifetime is somewhere around eighty-three percent… This does not require that one think the two [risks] are equally harmful; it requires only thinking that being a victim of a wrongful conviction is not 332 times as bad as being a victim of a serious crime. Perhaps it is better to be brutally raped or beaten than to be wrongfully convicted of doing so, but we doubt many would think it better to be brutally raped or beaten 332 times rather than wrongly convicted once.”
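For readers who wonder where the 332 comes from, it is simply the ratio of the two lifetime-risk estimates quoted above; a quick back-of-the-envelope check, using my arithmetic and their stated figures:

# Ratio of the two lifetime risks Laudan and Allen cite; this only checks
# their stated figures, it does not re-derive the underlying estimates.
p_wrongful_conviction = 0.0025   # ~0.25% lifetime risk of wrongful conviction for a serious crime
p_victimization = 0.83           # ~83% lifetime risk of being the victim of a serious crime
print(round(p_victimization / p_wrongful_conviction))   # -> 332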

Even though Laudan and Allen arrive at these numbers through what they regard as conservative estimates, some scholars will disagree with the basis for them. I’m not sure whether the figures are within the correct order of magnitude, but they seem plausible enough to start a conversation. And once we’re wearing our social planner’s hat, trying to figure out how to balance these competing risks, it is indisputably useful to have a good sense of what those risks actually are. Laudan and Allen provide some good reasons for thinking that we should not (legislatively or constitutionally) enshrine rules that focus inexorably on “innocentrism.”

But who are Laudan and Allen writing for? I dare say it’s legal academics (and perhaps journalists) prone to focus on the visible Type I errors. After all, as one friend of mine noted, it’s not as if crime control is a dead topic among policymakers: over the last forty years, we’ve seen significant if not staggering amounts of resources devoted to investigation, enforcement, and incapacitation. And while arguments persist over whether these social investments caused or merely coincided with the substantial crime drops of the last few decades, there is certainly no public clamoring for heightened criminal procedural protections, especially outside the death penalty context. To be clear, none of this undercuts the significance of the intervention I think Allen and Laudan are making; hence my desire to promote their work via JOTWELL. But my sense is that there are at least two areas that could benefit from some discussion or amplification.

First, Allen & Laudan’s number-crunching and guesstimating don’t seem especially sensitive to the distributive patterns of these competing risks of being a victim of a Type I error or a Type II error. Perhaps, in light of the prevalence of intra-racial crime, there’s some story that might justify this silence. After all, if it turns out that the average person of color in the inner-city faces a far greater likelihood of being the victim of a serious crime than being falsely convicted of a serious crime, she might have good reason to see various criminal procedure and evidence rules shift in the direction Allen & Laudan propose. Something like this story seems to animate, for example, the support Professors Dan Kahan and Tracey Meares once gave to the idea of democratizing the rules of criminal procedure. On the other hand, if that story is not true, and in fact the benefits and burdens are not borne with rough equality, then something more needs to be said before we all embrace the direction of these prescriptions.

The other thing that needs further attention in these papers is the nature of the risk associated with the kinds of crimes one might endure as a victim. To my mind, the analysis in both papers is not quite sufficiently granular. For example, at times Allen & Laudan gauge the gravity of a Type II error by noting the probability that the person who benefits from that error will commit “serious crimes” subsequently. The problem is that not all “serious crimes” are equal, a point they acknowledge but don’t fully take to heart. Their category of serious crime includes homicide and rape, but also armed robbery and aggravated assault. All of these crimes are serious, and no one seeks out the chance to be the victim of any of them. But if the comparison is between being falsely convicted of any of these crimes and being the victim of an armed robbery (which may not cause any physical harm) or an aggravated assault, I suspect the suffering associated with being wrongly convicted can be far more lasting and difficult than that associated with being the victim of an armed robbery.

Whether it’s 332 times worse, I’m not sure, but I can well imagine that some would say they would rather have the reputation and experience of being a crime victim than that of being an alleged criminal. In other words, people might sooner suffer an aggravated assault or armed robbery than be falsely convicted of those particular “serious” crimes. Not because those crimes are easy to bear generally, but because the losses from those crimes may be insured and because the experience of those crimes is, from a victim’s perspective, likely to be short. If we are to credit the studies of hedonic adaptation, one can go about one’s life afterward, in many cases, relatively normally. By contrast, a false conviction for assault or armed robbery could be much more lasting in terms of duration, stigma or other hardships (including collateral consequences imposed by the state and the incidental but foreseeable consequences contingently imposed by third parties) placed upon the falsely convicted defendant.

Moreover, and just as a practical matter, persons who are repeatedly victimized by serious crime are likely to become “hardened targets”: they will adopt prevention measures (moving neighborhoods, staying indoors, carrying weapons) that tend to displace the prospect of crime onto others who might not have been so repeatedly victimized.

To be sure, when looking at rape and homicide, many would properly view being the victim of those crimes as devastating, even if the two are not identical to each other. But being falsely convicted of rape or murder would also be incredibly difficult to bear. So my sense is that the conclusions Laudan and Allen draw would be more powerful if the risks being compared were isolated in a more particular way. Their risk analysis should look at the risks associated with each of these crimes separately if we are to be persuaded both that the risks we are trading off are remotely commensurable and that we need to contemplate more procedural rule changes, such as the sensible ones they propose in the context of pretrial release.

In any event, Laudan and Allen’s work is a bracing look at trade-offs we are making but not examining closely enough. Every teacher of criminal law and procedure should read these pieces and teach them to their students, but they should be the beginning of the conversation, not the end. To that end, readers might also be interested in Michael Risinger’s response to these articles, entitled Tragic Consequences of Deadly Dilemmas: A Response to Allen and Laudan.

P.S. The title’s phrase is not really Yogi Berra’s but Larry Laudan’s.

Cite as: Dan Markel, “Legal epistemology is ninety per cent quantitative. The other half is qualitative.” – Yogi Berra, JOTWELL (September 15, 2010) (reviewing Larry Laudan & Ronald J. Allen, DEADLY DILEMMAS II: BAIL AND CRIME, 85 Chi.-Kent L. Rev. 23 (2010); Ronald J. Allen & Larry Laudan, DEADLY DILEMMAS, 41 Tex. Tech L. Rev. 65 (2008)), https://crim.jotwell.com/legal-epistemology-is-ninety-per-cent-quantitative-the-other-half-is-qualitative-yogi-berra/.