The Journal of Things We Like (Lots)
Elizabeth E. Joh, Police Technology Experiments, 125 Colum. L. Rev. F. 1 (2025).

Data-driven policing technologies. By now, you can probably name a few off the top of your head: facial recognition technology, GPS location monitoring devices, Automated License Plate Readers, ShotSpotter, and, of course, predictive policing software. All are common examples of surveillance tools used by police that rely on algorithms to process large amounts of data. But what are these tools? Traditionally, we understand them as “technology.” In her recent article, Police Technology Experiments, Elizabeth Joh considers whether and how legal engagement with these tools might change if we conceptualize them as “experiments.” Because her novel framework raises important questions at the intersection of law, technology, and science in contemporary society, this is a must-read essay for legal scholars interested in policing, criminal law, and beyond.

Joh begins by explaining the current legal and social framework in which police technologies are evaluated. She defines police surveillance technologies as tools that draw on vast amounts of data and use algorithms to sort, classify, analyze, and produce inferences from that information for criminal investigation. These tools, she explains, exist within a scant regulatory environment. Investigative police surveillance triggers few, if any, Fourth Amendment restrictions. Further, data-driven policing is an increasingly dominant model of policing across the United States. Whether driven by federal funding, marketing and incentives from proprietary vendors, or both, this model is expanding. Despite standout U.S. Supreme Court decisions like Carpenter v. United States and Riley v. California, very little exists in the way of comprehensive regulation of police departments’ use of these technologies.

In light of this scant regulatory environment, Joh proposes conceptualizing (and thus approaching) police surveillance technologies as “technology experiments on human subjects.” As experiments, these tools would be evaluated for their adherence to “scientific method,” with an emphasis on “well-designed testing.” She argues that framing these tools as “the unproven uses of an automated system by the police for surveillance and intervention on human subjects” would reorient our focus toward ethical considerations because, like biomedical and behavioral research, they are tested on people and communities. Such research is guided by principles of “respect for persons, beneficence, and justice.”

Joh contends that this change in framework could yield different questions and practices in law. First, it would require a working hypothesis that government officials would evaluate in advance of experimentation. Joh looks to three examples of police deploying information technology to demonstrate the absence of such a practice: the Chicago Police Department’s adoption of an algorithmic risk assessment model to identify individuals at high risk of perpetrating or being the victim of gun violence (its “Strategic Subject List”); the Chicago Police Department’s adoption of acoustic gunshot detection technology (ShotSpotter) to assist in the deployment of police officers; and the Los Angeles Police Department’s program to flag high-risk individuals for extra police attention via an algorithmic system (Operation LASER—Los Angeles Strategic Extraction and Restoration). In each instance, plausible hypotheses about the technology’s use—whether it aims to reduce violence, gun crime, or bias in policing—would have been clearly contestable in advance of implementation. Joh points to the history of policing in Chicago and Los Angeles, along with Inspector General reports on the programs in each jurisdiction, to support this point. Moreover, evidence of each program’s failure to meet these goals after implementation would be obvious and pertinent to whether such programs should continue in a jurisdiction. Further, she contends that ethical considerations would be more prominent. The fact that these tools were deployed in areas largely populated by Black and Brown communities makes clear the unequal distribution of research benefits and burdens from an ethical point of view.

Ultimately, Joh contends, this framework makes visible important aspects of police surveillance technology’s expansion that are distinct from both technical and Fourth Amendment critiques. It is this last point that makes her essay so intriguing. Through it, Joh raises an important question about the blind spots that arise when a legal practice is treated as “technology” rather than “science.” Characterizing police surveillance technologies as “experiments” demands that we take seriously scientific standards that are easily overlooked when the tools are characterized as technology in criminal law.

Now, I am not convinced that this bioethical framework would lead to different practices among police departments, particularly for Black and Brown communities. We have a long history of using science, alongside law, as a tool of subordination. Indeed, bioethical regulations often exist precisely because of abuses in scientific study concentrated among marginalized communities. Yet whether Joh’s framework actually leads to different outcomes among police departments is only part of the point. Joh’s essay underscores that we can frame information technology in criminal law as both science and technology. What kinds of issues arise as we hew in one direction or the other is an important question for policing scholars, criminal law scholars, and others to consider going forward. This essay does a wonderful job bringing that question to the fore.

Cite as: Jessica M. Eaglin, Science or Technology? The Regulation of Police Surveillance Tools, JOTWELL (November 13, 2025) (reviewing Elizabeth E. Joh, Police Technology Experiments, 125 Colum. L. Rev. F. 1 (2025)), https://crim.jotwell.com/science-or-technology-the-regulation-of-police-surveillance-tools/.