🤖 AI Summary
This study addresses the challenge of generating trustworthy incident reports from multi-role, high-noise spoken dialogues in law enforcement settings. We propose the first trust-centered large language model (LLM) framework for this task. Methodologically, it integrates role-aware information extraction with procedural narrative generation, employing dialogue role separation, robust key-event extraction, noise-aware prompt engineering, and structured output constraints to safeguard officer and civilian rights while ensuring regulatory compliance. Our key contribution lies in explicitly modeling accountability, fairness, and transparency as primary LLM generation objectives—rather than as post-hoc attributes. Evaluated on real-world police ASR transcripts, the framework achieves 92% recall for critical report elements and 87% procedural correctness, significantly improving report consistency and auditability. This work establishes a verifiable, deployable technical paradigm for intelligent policing documentation systems.
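The summary above names a concrete pipeline: role-separated dialogue, noise-aware prompting, and a structured output constraint on the generated report. As a minimal illustrative sketch (not the authors' implementation — the turn format, field names, and prompt wording here are all hypothetical), the prompt-construction step might look like:

```python
# Hypothetical role-tagged turns from a noisy ASR transcript.
TURNS = [
    ("Officer", "Can you tell me what happened here tonight?"),
    ("Civilian", "Someone broke my car window around, uh, nine PM."),
    ("Officer", "[unintelligible] ... did you see the person?"),
]

# Assumed set of critical report elements the framework must recall.
REQUIRED_FIELDS = ["time", "location", "incident_type", "parties", "actions_taken"]

def build_prompt(turns, required_fields):
    """Compose a noise-aware, role-separated prompt that constrains
    the LLM to a fixed structured-output schema."""
    dialogue = "\n".join(f"{role}: {text}" for role, text in turns)
    fields = ", ".join(required_fields)
    return (
        "The transcript below comes from automatic speech recognition and may "
        "contain errors or [unintelligible] spans; do not guess missing facts.\n\n"
        f"{dialogue}\n\n"
        "Draft a police incident report. Return JSON with exactly these keys: "
        f"{fields}. Use null for any field not stated in the dialogue."
    )

prompt = build_prompt(TURNS, REQUIRED_FIELDS)
```

The structured-output constraint ("exactly these keys", "null for any field not stated") is what makes the draft auditable: a downstream validator can check field coverage mechanically rather than re-reading free text.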
📝 Abstract
Achieving a delicate balance between fostering trust in law enforcement and protecting the rights of both officers and civilians remains a pressing research and product challenge. In pursuit of fairness and transparency, this study presents an AI-driven system that generates police report drafts from complex, noisy, multi-role dialogue data. Our approach extracts key elements of law enforcement interactions and incorporates them into the draft, producing structured narratives that are not only high in quality but also reinforce accountability and procedural clarity. This framework holds the potential to transform the reporting process, ensuring greater oversight, consistency, and fairness in future policing practices. A demonstration video of our system is available at https://drive.google.com/file/d/1kBrsGGR8e3B5xPSblrchRGj-Y-kpCHNO/view?usp=sharing