Large Language Models in the Abuse Detection Pipeline

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing abuse detection methods, which rely on static models and manual annotations and struggle to handle dynamic, context-sensitive online abusive behaviors. It proposes the first large language model (LLM)-integrated framework spanning the entire lifecycle of abuse detection, systematically encompassing label and feature generation, detection, appeal review, and audit governance. Bridging academic research and industrial practice, the study explores LLMs’ capabilities in contextual reasoning, policy interpretation, and cross-modal understanding. The authors delineate key architectural considerations for each phase, evaluate LLMs’ potential and limitations regarding interpretability, policy alignment, and multimodal fusion, and identify critical challenges—including latency, cost, determinism, adversarial robustness, and fairness—thereby offering a principled direction toward building reliable, accountable, large-scale abuse governance systems.
📝 Abstract
Online abuse has grown increasingly complex, spanning toxic language, harassment, manipulation, and fraudulent behavior. Traditional machine-learning approaches that depend on static classifiers and labor-intensive labeling struggle to keep pace with evolving threat patterns and nuanced policy requirements. Large Language Models introduce new capabilities for contextual reasoning, policy interpretation, explanation generation, and cross-modal understanding, enabling them to support multiple stages of modern safety systems. This survey provides a lifecycle-oriented analysis of how LLMs are being integrated into the Abuse Detection Lifecycle (ADL), which we define across four stages: (I) Label & Feature Generation, (II) Detection, (III) Review & Appeals, and (IV) Auditing & Governance. For each stage, we synthesize emerging research and industry practices, highlight architectural considerations for production deployment, and examine the strengths and limitations of LLM-driven approaches. We conclude by outlining key challenges, including latency, cost-efficiency, determinism, adversarial robustness, and fairness, and discuss future research directions needed to operationalize LLMs as reliable, accountable components of large-scale abuse-detection and governance systems.
Problem

Research questions and friction points this paper is trying to address.

online abuse
abuse detection
large language models
content moderation
safety systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Abuse Detection Lifecycle
Contextual Reasoning
Policy Interpretation
Cross-modal Understanding
Suraj Kath
Google LLC
Sanket Badhe
Google LLC
Preet Shah
Graduate Student at University of Washington
Robotics · Reinforcement Learning · Controls
Ashwin Sampathkumar
Google LLC
Shivani Gupta
Google LLC