🤖 AI Summary
Large language models (LLMs) used for content moderation are often over-sensitive, frequently misclassifying benign comments as toxic. We show that this stems not only from overtly offensive wording but also from latent, systematic topic association biases: stereotyped semantic links between socially sensitive topics (e.g., gender, race, illness) and toxicity. Method: We adapt the Implicit Association Test (IAT) paradigm from cognitive psychology to LLM safety analysis, building a topic association framework that combines free-form generative prompting, quantitative measurement of topic amplification, and cross-model bias comparison. Contribution/Results: Experiments reveal a counterintuitive trend: more advanced models such as GPT-4 Turbo achieve lower false-positive rates, yet their topic stereotyping intensifies. This work establishes an interpretable, empirically grounded attribution pathway for content moderation, moving beyond keyword-based filtering toward bias-aware, topic-sensitive safety optimization.
📝 Abstract
Large Language Models are widely used for content moderation but often misclassify benign comments as toxic, leading to over-sensitivity. While previous research attributes this issue primarily to the presence of offensive terms, we reveal a potential cause beyond the token level: LLMs exhibit systematic topic biases in their implicit associations. Inspired by cognitive psychology's implicit association tests, we introduce Topic Association Analysis, a semantic-level approach to quantifying how LLMs associate certain topics with toxicity. By prompting LLMs to generate free-form imagined scenarios for misclassified benign comments and analyzing their topic amplification levels, we find that more advanced models (e.g., GPT-4 Turbo) demonstrate stronger topic stereotypes despite lower overall false positive rates. These biases suggest that LLMs do not merely react to explicit offensive language but rely on learned topic associations that shape their moderation decisions. Our findings highlight the need for refinement beyond keyword-based filtering, providing insights into the underlying mechanisms driving LLM over-sensitivity.
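To make the "topic amplification" measurement concrete, here is a minimal sketch of one plausible way to compute it. Everything here is an illustrative assumption, not the paper's actual pipeline: the `TOPIC_LEXICONS` keyword sets, the crude tokenizer, and the add-one-smoothed ratio are stand-ins for whatever topic detection and metric the authors use. The idea is simply to compare how often a topic appears in the scenarios an LLM imagines for benign comments versus in the comments themselves; a ratio above 1 indicates the model amplified that topic.

```python
import re
from collections import Counter

# Hypothetical topic lexicons for illustration only; the paper's
# topic taxonomy and detection method may differ.
TOPIC_LEXICONS = {
    "gender": {"woman", "man", "female", "male", "girl", "boy"},
    "race": {"race", "racial", "ethnic", "minority"},
    "illness": {"sick", "disease", "illness", "disorder"},
}

def topic_counts(texts):
    """Count, per topic, how many texts mention it at least once."""
    counts = Counter()
    for text in texts:
        tokens = set(re.findall(r"[a-z]+", text.lower()))  # crude tokenizer
        for topic, lexicon in TOPIC_LEXICONS.items():
            if tokens & lexicon:
                counts[topic] += 1
    return counts

def topic_amplification(comments, scenarios):
    """Add-one-smoothed ratio of topic mention rates: generated
    scenarios vs. original comments. > 1.0 means amplification."""
    base = topic_counts(comments)
    gen = topic_counts(scenarios)
    return {
        t: ((gen[t] + 1) / (len(scenarios) + 1))
           / ((base[t] + 1) / (len(comments) + 1))
        for t in TOPIC_LEXICONS
    }

# Toy usage: one benign comment and the scenario an LLM imagined for it.
comments = ["she handled the outage calmly"]
scenarios = ["A woman is being attacked online because of her gender."]
print(topic_amplification(comments, scenarios))
# The gender topic is absent from the comment but present in the
# imagined scenario, so its ratio exceeds the other topics'.
```

In a real study the `scenarios` list would come from prompting the model under test for free-form scenario imagination on each misclassified comment; the smoothing avoids division by zero when a topic never appears in the source comments.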