Responsible AI in criminal justice: LLMs in policing and risks to case progression

πŸ“… 2026-03-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses the absence of a systematic risk identification and governance framework for large language models (LLMs) in policing, which may substantially impact case progression. Drawing on legal standards and police operational practices in England and Wales, it systematically maps LLM applications across 15 distinct policing tasks, identifies 17 categories of critical risks, and constructs a β€œtask–risk–impact” mapping framework grounded in over 40 representative case studies. Employing qualitative analysis and case-based methodologies, the research delivers an actionable risk modeling tool and governance foundation to support responsible AI deployment in criminal justice, thereby facilitating the principled and regulated integration of LLMs into law enforcement practice.

πŸ“ Abstract
There is growing interest in the use of Large Language Models (LLMs) in policing, but there are potential risks. We have developed a practical approach to identifying risks, grounded in the policing and legal system of England and Wales. We identify 15 policing tasks that could be implemented using LLMs and 17 risks arising from their use, then illustrate these with over 40 examples of impact on case progression. As good practice is agreed, many risks could be reduced. But this requires effort: we need to address these risks in a timely manner and define system-wide impacts and benefits.
Problem

Research questions and friction points this paper is trying to address.

Responsible AI
Large Language Models
policing
case progression
AI risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Responsible AI
Policing
Risk Assessment
Case Progression
πŸ”Ž Similar Papers
No similar papers found.
Muffy Calder
Professor of Computing Science, University of Glasgow
formal methods
Marion Oswald
School of Law, University of Northumbria
Elizabeth McClory-Tiarks
School of Law, University of Newcastle
Michele Sevegnani
Senior Lecturer, School of Computing Science, University of Glasgow
Bigraphs, Formal methods, Model checking, Formal verification
Evdoxia Taka
School of Computing Science, University of Glasgow