Towards User-Centred Design of AI-Assisted Decision-Making in Law Enforcement

📅 2025-04-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Law enforcement AI systems lack clearly articulated user-centred requirements, hindering real-world deployment. Method: We conducted qualitative research—including in-depth interviews and contextual observations—with frontline law enforcement personnel to systematically identify human-factor requirements. Contribution/Results: We distill five core human-centred requirements: data processing, explainability, trustworthiness, human-AI collaboration, and system adaptability. We propose a "human-in-the-loop" dynamic collaboration paradigm featuring shared accountability, dual-stage validation (input scrutiny plus output verification), and continuous human-guided evolution. Grounded in trustworthy AI principles, we formulate five design principles—scalability, accuracy, justification, trustworthiness, and adaptability—and develop a lifecycle-spanning guideline for human supervision and feedback integration. This work establishes a practical, user-centred design framework for AI-augmented decision support in law enforcement.

📝 Abstract
Artificial Intelligence (AI) has become an important part of our everyday lives, yet user requirements for designing AI-assisted systems in law enforcement remain unclear. To address this gap, we conducted qualitative research on decision-making within a law enforcement agency. Our study aimed to identify limitations of existing practices, explore user requirements and understand the responsibilities that humans expect to undertake in these systems. Participants in our study highlighted the need for a system capable of processing and analysing large volumes of data efficiently to help in crime detection and prevention. Additionally, the system should satisfy requirements for scalability, accuracy, justification, trustworthiness and adaptability to be adopted in this domain. Participants also emphasised the importance of having end users review the input data that might be challenging for AI to interpret, and validate the generated output to ensure the system's accuracy. To keep up with the evolving nature of the law enforcement domain, end users need to help the system adapt to the changes in criminal behaviour and government guidance, and technical experts need to regularly oversee and monitor the system. Furthermore, user-friendly human interaction with the system is essential for its adoption and some of the participants confirmed they would be happy to be in the loop and provide necessary feedback that the system can learn from. Finally, we argue that it is very unlikely that the system will ever achieve full automation due to the dynamic and complex nature of the law enforcement domain.
Problem

Research questions and friction points this paper is trying to address.

Identify limitations of current law enforcement decision-making practices
Explore user requirements for AI-assisted systems in law enforcement
Understand human responsibilities in AI-assisted law enforcement systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI processes large data for crime detection
System ensures scalability, accuracy, and trustworthiness
Human reviews and feedback enhance AI adaptability
Vesna Nowack
Imperial College London, UK
Dalal Alrajeh
Associate Professor, Department of Computing, Imperial College London
Formal methods · Software engineering · Symbolic AI
Carolina Gutiérrez Muñoz
University of Bath, UK
Katie Thomas
University of Bath, UK
William Hobson
University of Bath, UK
Catherine Hamilton-Giachritsis
University of Bath, UK
Patrick Benjamin
University of Oxford, UK
Tim D. Grant
Aston University, UK
Juliane A. Kloess
University of Edinburgh, UK
Jessica Woodhams
University of Birmingham, UK