Probing Association Biases in LLM Moderation Over-Sensitivity

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit pervasive over-sensitivity in content moderation, frequently misclassifying benign comments as toxic. This stems not merely from overtly offensive vocabulary but from latent, systematic thematic association biases: stereotyped semantic links between specific socially sensitive topics (e.g., gender, race, illness) and toxicity. Method: the paper introduces the Implicit Association Test (IAT) paradigm from cognitive psychology into LLM safety analysis, developing a thematic association framework that combines free-form generative prompting, quantitative measurement of thematic amplification, and cross-model bias comparison (e.g., GPT-4 Turbo). Contribution/Results: experiments reveal a counterintuitive trend: while state-of-the-art models reduce false-positive rates, their thematic stereotyping intensifies. This work establishes a novel, interpretable, and empirically grounded attribution pathway for content moderation, moving beyond keyword-based filtering toward bias-aware, theme-sensitive safety optimization.

📝 Abstract
Large Language Models are widely used for content moderation but often misclassify benign comments as toxic, leading to over-sensitivity. While previous research attributes this issue primarily to the presence of offensive terms, we reveal a potential cause beyond the token level: LLMs exhibit systematic topic biases in their implicit associations. Inspired by cognitive psychology's implicit association tests, we introduce Topic Association Analysis, a semantic-level approach to quantify how LLMs associate certain topics with toxicity. By prompting LLMs to generate free-form scenario imaginations for misclassified benign comments and analyzing their topic amplification levels, we find that more advanced models (e.g., GPT-4 Turbo) demonstrate stronger topic stereotypes despite lower overall false positive rates. These biases suggest that LLMs do not merely react to explicit, offensive language but rely on learned topic associations that shape their moderation decisions. Our findings highlight the need for refinement beyond keyword-based filtering, providing insights into the underlying mechanisms driving LLM over-sensitivity.
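The core measurement loop in the abstract (elicit a scenario imagination for a misclassified comment, then score how much each sensitive topic is amplified relative to the original text) can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: the topic lexicons, the keyword-matching topic tagger, and the length-normalized amplification score below are all illustrative assumptions.

```python
# Hypothetical sketch of Topic Association Analysis: given a benign
# comment a moderation model flagged, and the model's free-form
# "scenario imagination" about it, estimate how much each sensitive
# topic is amplified in the imagined scenario versus the original.
# Lexicons, example texts, and the scoring formula are assumptions.

from collections import Counter

# Toy topic lexicons; a real study would use a far richer topic tagger.
TOPIC_LEXICONS = {
    "gender": {"woman", "women", "man", "men", "girl", "boy"},
    "illness": {"sick", "disease", "illness", "hospital", "patient"},
    "race": {"race", "racial", "ethnic"},
}

def topic_counts(text):
    """Count lexicon hits per topic after naive tokenization."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    counts = Counter()
    for topic, lexicon in TOPIC_LEXICONS.items():
        counts[topic] = sum(1 for t in tokens if t in lexicon)
    return counts

def amplification(comment, imagination):
    """Per-topic amplification: imagined mentions minus original
    mentions, normalized by the imagination's word count."""
    base = topic_counts(comment)
    imag = topic_counts(imagination)
    n = max(len(imagination.split()), 1)
    return {t: (imag[t] - base[t]) / n for t in TOPIC_LEXICONS}

# Stand-in for an LLM's scenario imagination about a benign comment.
comment = "She finally got out of the hospital today."
imagined = ("A woman who was a patient argues with men about her illness "
            "and the hospital staff.")
amp = amplification(comment, imagined)
# Gender and illness mentions grow in the imagined scenario, so their
# amplification scores are positive; race stays at zero.
```

Positive scores for topics absent from (or rare in) the benign comment indicate exactly the stereotyped topic-to-toxicity associations the paper attributes over-sensitivity to.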
Problem

Research questions and friction points this paper is trying to address.

LLMs misclassify benign comments as toxic due to over-sensitivity
LLMs exhibit systematic topic biases in toxicity associations
Advanced models show stronger topic stereotypes despite lower false positives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Topic Association Analysis for bias detection
Semantic-level approach beyond token analysis
Scenario imagination to quantify topic biases