🤖 AI Summary
This study addresses inconsistent severity assessments of AI safety violations across culturally diverse populations—a critical challenge in generative AI safety evaluation. We propose a nonparametric response consistency metric to quantify perceptual differences in severity judgments (on Likert-type ordinal scales) across age, gender, and ethnic groups. Methodologically, we integrate nonparametric statistics, ordinal regression, and multi-group comparative modeling to enable fine-grained, cross-demographic characterization of safety judgment bias for the first time, augmented by violation-type–specific interaction analysis. Our contributions are threefold: (1) a culturally sensitive, interpretable responsiveness metric; (2) empirical identification of systematic mechanisms through which demographic variables influence safety ratings; and (3) enhanced fairness and robustness in prioritizing safety concerns, thereby improving the reliability and inclusivity of AI safety assessment in multicultural contexts.
📝 Abstract
Ensuring the safety of generative AI requires a nuanced understanding of pluralistic viewpoints. In this paper, we introduce a novel data-driven approach for calibrating granular ratings in pluralistic datasets. Specifically, we address the challenge of interpreting the safety judgments that a diverse population expresses on ordinal scales (e.g., Likert scales). We distill nonparametric responsiveness metrics that quantify how consistently raters score the varying levels of severity of safety violations. Using safety evaluation of AI-generated content as a case study, we investigate how raters from different demographic groups (age, gender, ethnicity) use an ordinal scale to express their perception of the severity of violations in a pluralistic safety dataset. We apply our metrics across violation types, demonstrating their utility in extracting nuanced insights that are crucial for developing reliable AI systems in multicultural contexts. We show that our approach improves the prioritization of safety concerns by capturing nuanced viewpoints across demographic groups, thereby improving the reliability of pluralistic data collection and, in turn, contributing to more robust AI evaluations.
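The paper's specific responsiveness metrics are not defined in this abstract. As a minimal, hedged illustration of the kind of nonparametric cross-group comparison the approach involves, the sketch below implements a tie-corrected Kruskal-Wallis test over Likert severity ratings from two hypothetical demographic groups; the statistic, group names, and ratings are our illustrative assumptions, not the authors' actual metric or data.

```python
# Illustrative sketch (not the paper's metric): a rank-based, nonparametric
# comparison of how different demographic groups use an ordinal severity scale.
from collections import Counter


def midranks(values):
    """Assign mid-ranks to a list of ordinal ratings, averaging ties."""
    sorted_vals = sorted(values)
    ranks = {}
    i = 0
    while i < len(sorted_vals):
        j = i
        while j < len(sorted_vals) and sorted_vals[j] == sorted_vals[i]:
            j += 1
        # the run of equal values occupies 1-based positions i+1 .. j
        ranks[sorted_vals[i]] = (i + 1 + j) / 2
        i = j
    return [ranks[v] for v in values]


def kruskal_wallis_h(groups):
    """Tie-corrected Kruskal-Wallis H over lists of ordinal ratings."""
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    r = midranks(pooled)
    # sum of (rank-sum squared / group size), splitting ranks back per group
    h, idx = 0.0, 0
    for g in groups:
        rank_sum = sum(r[idx:idx + len(g)])
        idx += len(g)
        h += rank_sum ** 2 / len(g)
    h = 12 / (n * (n + 1)) * h - 3 * (n + 1)
    # correction for tied ratings (many ties are expected on a Likert scale)
    counts = Counter(pooled)
    tie = 1 - sum(c ** 3 - c for c in counts.values()) / (n ** 3 - n)
    return h / tie


# Hypothetical Likert severity ratings (1 = no violation .. 5 = severe)
group_a = [1, 2, 2, 3, 3, 4]
group_b = [3, 3, 4, 4, 5, 5]
print(round(kruskal_wallis_h([group_a, group_b]), 2))  # prints 4.59
```

A large H suggests the groups place their ratings at systematically different points on the scale, which is the sort of cross-demographic divergence the abstract says the proposed metrics are designed to surface and characterize.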