Andrew Guthrie Ferguson, Generative Suspicion and the Risks of AI-Assisted Police Reports (July 17, 2024), available at SSRN.

Humans do not enjoy the vital drudgery of paperwork, including writing reports. Increasingly, people are turning to products powered by machine learning and artificial intelligence to produce reports. Students do it. Scientists do it. Doctors might do it. And police are starting to do it too, thanks to technology companies like Axon. Andrew Guthrie Ferguson, one of the most prescient scholars of policing and technology, offers in his recent paper, Generative Suspicion and the Risks of AI-Assisted Police Reports, a fascinating overview of AI-generated police reports and their potential impact on criminal practice.

Police reports might seem like dull bureaucratic minutiae. But a police report can shape a person’s fate, from whether and what charges are filed, to the plea deal that is offered, to the sentence a defendant receives. One of the first items a prosecutor or defense attorney reviews in a criminal case, the police report shapes and constrains the narrative. The report defines victims and perpetrators, provides potential impeachment material for trial, and affects the availability of defenses. The transformation of how police reports are generated is thus important, with potential systemic impacts.

Ferguson discusses the potential ramifications of the advent of Axon’s “Draft One” product, which uses an OpenAI GPT-4 Turbo model to generate police reports. Launched in April 2024, Draft One uses a large language model to process the audio captured by police body-worn cameras and fill in the details of a police report. Ferguson’s paper cautions that AI-generated reports pose the danger of “digital poisoning” of fact-finding “by algorithmically altering the narrative.” Professor Ferguson predicts that the temptations of efficiency, including savings in time and officer-hours, will drive wider adoption of AI-generated police reports.

The article begins with an exposition, useful for a non-law-enforcement audience, of how Draft One works. This discussion includes valuable screenshots of what the user sees. The figures show the fill-in-the-blanks sections that still require officers to engage, somewhat, with the narrative. Another interesting screenshot shows how the software ensures officers check the story the machine writes by inserting a nonsense detail that the officer must edit out of the narrative.

Just a few years ago, policy framers debated whether officers should have access to their body camera footage before drafting their reports because of the risk of contaminating their recollections. Cameras can capture details the officers never perceived in real time. Reviewing the footage risks officers revising their memory and accounts to fit the video.

Technology has transformed professional norms at a breathtaking pace. In a brief time, we have moved well past the days of worrying that people will try to conform their narratives and memories to machine data. Ferguson predicts that a future is dawning in which machines will draft the narrative first. Machines do the recording and reporting work, from the body-worn camera that captures the encounter to the AI models that convert the audio data into police reports. Officers are then supposed to read and edit the story, inserting some details.

Normatively, Ferguson’s paper cautions about the turn to AI-generated police reports. He details three clusters of concerns over the technology and then offers a critique about the impacts of this technology on both the purposes served by police reports and on key points in criminal processing.

Any discussion of the technological limitations of AI usually begins with concerns over the datasets used to train algorithms, which can introduce errors, and the lack of algorithmic transparency. Ferguson begins here, with critiques tailored to the context of policing. For example, datasets may not capture the full heterogeneity of policing across different places, such as urban versus rural contexts. He argues for greater transparency about how algorithms encode preferred and prohibited language choices.

Ferguson then turns to the risk of transcription errors, which may vary depending on accents and language choices. He also covers other interesting types of errors, such as hallucinations, which arise when the algorithm makes up facts because its pattern-based predictions go wrong.

A third cluster of concerns centers on how AI can transform the narrative of what happened, in potentially legally significant ways. For example, the timing of actions may affect their constitutionality, and the nuances of timing may be obscured or even altered by language choices made by the software. The inclusion of AI-generated details drawn from predictive patterns may, for example, cover up a lack of probable cause.

The theoretical portion of Ferguson’s paper delves into concerns over how AI-generated police reports can undermine the traditional purpose of such reports and affect criminal processing, from the decision whether to arrest through pretrial detention, plea bargaining or trial, and sentencing. He explains how AI-generated police reports can flatten the narrative, stunt factual development, and generate potential biases. The replacement of a person with a technology as the primary author also poses accountability and authority problems. These problems in turn impact the life cycle of criminal processing because of the important role of the police report in decisions such as pretrial detention, plea bargaining, shaping trial testimony and impeachment, and even sentencing.

The availability of AI-powered reporting may even affect the decision whether to arrest, before a criminal case is ever initiated. Ferguson argues that reducing the costs of human drafting removes a check on police power: the pain of doing paperwork is itself a check on the decision to arrest. Without this internalization of costs, officers have less incentive to exercise their discretion to decline to arrest.

Can so much change stem from the availability of new report-assisting software? Is filling in the details of a police report using AI really much of a substantive deviation from the current formulaic process of report-writing?

As attorneys in the criminal system know from reading numerous similar-sounding police reports, long before AI or Draft One there was the simple copy and paste. Police report narratives often sound numbingly similar within each genre of case because of reused boilerplate language, with just the date, names, and a few other details changed.

Ferguson acknowledges that much report-writing already consists of “mostly-fill-in the blanks forms requiring minimal exposition” using drop-down menus. He notes how one police department deploying Draft One found it hard to discern AI-generated reports from reports generated by the usual process because officers used a pastiche of AI-generated material and material generated by longstanding methods. But he also makes a compelling theoretical case that the attribution and labor of authorship matter. AI-assisted report writing removes the authority and responsibilities of initial authorship and attribution, along with the associated safeguards. This well-argued article thus combines an excellent command of on-the-ground realities with valuable cautions about a dawning future of potentially sweeping changes.

Cite as: Mary Fan, AI-Generated Police Reports, JOTWELL (March 4, 2025) (reviewing Andrew Guthrie Ferguson, Generative Suspicion and the Risks of AI-Assisted Police Reports (July 17, 2024), available at SSRN), https://crim.jotwell.com/ai-generated-police-reports/.