"Just a strange pic": Evaluating 'safety' in GenAI Image safety annotation tasks from diverse annotators' perspectives

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI image safety evaluation frameworks overlook annotators' multidimensional subjective judgments, limiting their capacity to capture moral, affective, and context-sensitive perceptions of harm. This study analyzes 5,372 open-ended annotator comments using qualitative coding and thematic analysis, complemented by an examination of how task structure and annotation guidelines shape judgments. It shows how annotators identify potential harms through personal experience, sociocultural context, visual artifacts, and prompt-output misalignment: dimensions beyond conventional technical metrics. Key findings: (1) annotators consistently prioritize collective and structural risks over individual ones; (2) the structure of the task, including its guidelines, shapes both which images are flagged and the moral reasoning behind annotators' justifications. The authors argue that evaluation design should integrate moral reflection mechanisms, hierarchical harm categorization (e.g., individual, societal, cultural), and room for context-sensitive interpretation. This work provides both theoretical grounding and methodological pathways for developing more interpretable, inclusive, and socially grounded AI safety assessment paradigms.

📝 Abstract
Understanding what constitutes safety in AI-generated content is complex. While developers often rely on predefined taxonomies, real-world safety judgments also involve personal, social, and cultural perceptions of harm. This paper examines how annotators evaluate the safety of AI-generated images, focusing on the qualitative reasoning behind their judgments. Analyzing 5,372 open-ended comments, we find that annotators consistently invoke moral, emotional, and contextual reasoning that extends beyond structured safety categories. Many reflect on potential harm to others more than to themselves, grounding their judgments in lived experience, collective risk, and sociocultural awareness. Beyond individual perceptions, we also find that the structure of the task itself -- including annotation guidelines -- shapes how annotators interpret and express harm. Guidelines influence not only which images are flagged, but also the moral judgment behind the justifications. Annotators frequently cite factors such as image quality, visual distortion, and mismatches between prompt and output as contributing to perceived harm dimensions, which are often overlooked in standard evaluation frameworks. Our findings reveal that existing safety pipelines miss critical forms of reasoning that annotators bring to the task. We argue for evaluation designs that scaffold moral reflection, differentiate types of harm, and make space for subjective, context-sensitive interpretations of AI-generated content.
Problem

Research questions and friction points this paper is trying to address.

Understanding diverse annotators' safety judgments in AI-generated images
Examining moral and contextual reasoning beyond structured safety categories
Identifying gaps in current safety evaluation frameworks for GenAI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing annotators' moral and emotional reasoning
Studying impact of task structure on harm interpretation
Proposing context-sensitive safety evaluation frameworks