AI Summary
This study addresses the absence of a systematic risk identification and governance framework for large language models (LLMs) in policing, where errors may substantially affect case progression. Drawing on legal standards and police operational practice in England and Wales, it systematically maps LLM applications across 15 distinct policing tasks, identifies 17 categories of critical risk, and constructs a "task-risk-impact" mapping framework grounded in over 40 representative case studies. Employing qualitative analysis and case-based methodologies, the research delivers an actionable risk-modelling tool and a governance foundation to support responsible AI deployment in criminal justice, thereby facilitating the principled and regulated integration of LLMs into law enforcement practice.
Abstract
There is growing interest in the use of Large Language Models (LLMs) in policing, but their use carries potential risks. We have developed a practical approach to identifying those risks, grounded in the policing and legal system of England and Wales. We identify 15 policing tasks that could be implemented using LLMs and 17 risks arising from their use, then illustrate these with over 40 examples of impact on case progression. As good practice becomes agreed, many risks could be reduced. But this requires effort: we need to address these risks in a timely manner and to define system-wide impacts and benefits.