"I Cannot Write This Because It Violates Our Content Policy": Understanding Content Moderation Policies and User Experiences in Generative AI Products

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the critical misalignment between content moderation policies and user experience in generative AI (GAI) products. Employing qualitative content analysis, discourse mining from Reddit communities, and comparative policy analysis across 14 mainstream GAI tools, we systematically identify three core tensions: severe deficits in policy transparency, widespread absence of user appeal and participatory mechanisms, and lack of explainability in moderation decisions. While automated moderation systems effectively block malicious content, frequent false positives, inadequate feedback channels, and ineffective post-moderation support erode user trust and provoke widespread frustration. Our findings provide empirical grounding for GAI governance and propose a user-centered framework to enhance moderation transparency, appealability, and explainability, thereby addressing a key gap in human-AI collaborative moderation research focused on end-user experience.

📝 Abstract
While recent research has focused on developing safeguards for generative AI (GAI) model-level content safety, little is known about how content moderation to prevent malicious content performs for end-users in real-world GAI products. To bridge this gap, we investigated content moderation policies and their enforcement in GAI online tools -- consumer-facing web-based GAI applications. We first analyzed content moderation policies of 14 GAI online tools. While these policies are comprehensive in outlining moderation practices, they usually lack details on practical implementations and are not specific about how users can aid in moderation or appeal moderation decisions. Next, we examined user-experienced content moderation successes and failures through Reddit discussions on GAI online tools. We found that although moderation systems succeeded in blocking malicious generations pervasively, users frequently experienced frustration in failures of both moderation systems and user support after moderation. Based on these findings, we suggest improvements for content moderation policy and user experiences in real-world GAI products.
Problem

Research questions and friction points this paper is trying to address.

Understanding content moderation in generative AI products
Analyzing gaps in policy implementation and user support
Improving user experiences with moderation systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzed 14 GAI tools' content moderation policies
Examined user experiences via Reddit discussions
Suggested improvements for moderation policies and UX