Identity-related Speech Suppression in Generative AI Content Moderation

📅 2024-09-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies and characterizes identity-related speech over-suppression in generative AI content moderation. The authors formally define and quantify speech suppression and introduce an evaluation benchmark covering nine identity groups, spanning both the short-form user-generated texts traditional in content moderation and longer generative AI-focused content, including two newly introduced datasets. The methodology combines black-box testing of multiple commercial moderation APIs, cross-dataset validation, identity-sensitivity measurement, and fine-grained misclassification attribution. Across one traditional and four generative AI-focused moderation services, identity-related speech is more likely to be incorrectly suppressed than other speech. The APIs perform better on generative AI data overall but worse on longer text instances, and the character of over-suppression differs by data type: on traditional short-form data, incorrectly suppressed speech tends to be political, while on generative creative data it tends to be depictions of television violence. The study provides an analytical framework and empirical grounding for understanding expressive inequity in AI-driven content moderation systems.

📝 Abstract
Automated content moderation has long been used to help identify and filter undesired user-generated content online. Generative AI systems now use such filters to keep undesired generated content from being created by or shown to users. From classrooms to Hollywood, as generative AI is increasingly used for creative or expressive text generation, whose stories will these technologies allow to be told, and whose will they suppress? In this paper, we define and introduce measures of speech suppression, focusing on speech related to different identity groups incorrectly filtered by a range of content moderation APIs. Using both short-form, user-generated datasets traditional in content moderation and longer generative AI-focused data, including two datasets we introduce in this work, we create a benchmark for measurement of speech suppression for nine identity groups. Across one traditional and four generative AI-focused automated content moderation services tested, we find that identity-related speech is more likely to be incorrectly suppressed than other speech. We find differences in identity-related speech suppression for traditional versus generative AI data, with APIs performing better on generative AI data but worse on longer text instances, and by identity, with identity-specific reasons for incorrect flagging behavior. Overall, we find that on traditional short-form data incorrectly suppressed speech is likely to be political, while for generative AI creative data it is likely to be television violence.
Problem

Research questions and friction points this paper is trying to address.

Measures identity-related speech suppression in AI content moderation
Benchmarks incorrect filtering of speech across nine identity groups
Analyzes differences in suppression between traditional and AI-generated data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defines formal measures of identity-related speech suppression
Introduces a benchmark spanning nine identity groups, including two new generative AI-focused datasets
Evaluates five commercial content moderation APIs (one traditional, four generative AI-focused)
Oghenefejiro Isaacs Anigboro
Haverford College
Charlie M. Crawford
Haverford College
Danaë Metaxa
University of Pennsylvania
Sorelle A. Friedler
Shibulal Family Professor of Computer Science, Haverford College
Algorithmic Fairness · Interpretability · Machine Learning · AI Ethics · Tech Policy