🤖 AI Summary
This study addresses systemic bias in generative AI arising from implicit social stereotypes. We propose a computable, multidimensional stereotype model that formalizes psychological stereotypes into five core components — target group, associated attribute, relationship characteristics, perceiving group, and relevant context — enabling a modular evaluation protocol. Our approach integrates theoretical modeling, NLP-driven dataset construction, and semantic relation extraction to support fine-grained detection, attribution, and context-sensitive analysis of stereotypes in AI outputs. The framework bridges social psychological theory and AI evaluation practice, delivering a reusable, interpretable, and extensible bias assessment infrastructure, and improving the systematicity and robustness of bias evaluation for generative AI systems.
📝 Abstract
Societal stereotypes are at the center of a myriad of responsible AI interventions targeted at reducing the generation and propagation of potentially harmful outcomes. While these efforts are much needed, they tend to be fragmented, often addressing different parts of the issue without taking a unified or holistic approach to social stereotypes and how they impact various parts of the machine learning pipeline. As a result, they fail to capitalize on the underlying mechanisms that are common across different types of stereotypes, and to anchor on the particular aspects that are relevant in specific cases. In this paper, we draw on social psychological research, and build on NLP data and methods, to propose a unified framework to operationalize stereotypes in generative AI evaluations. Our framework identifies key components of stereotypes that are crucial in AI evaluation, including the target group, associated attribute, relationship characteristics, perceiving group, and relevant context. We also provide considerations and recommendations for its responsible use.
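The five stereotype components named in the abstract could be operationalized as a simple record type for evaluation datasets. The following is a minimal sketch, not the paper's actual protocol; all field and class names are illustrative assumptions:

```python
from dataclasses import dataclass


# Hypothetical encoding of the five-component stereotype model from the
# abstract: target group, associated attribute, relationship
# characteristics, perceiving group, and relevant context.
@dataclass(frozen=True)
class StereotypeInstance:
    target_group: str          # group the stereotype is about
    associated_attribute: str  # trait or behavior linked to that group
    relationship: str          # nature of the group-attribute link
    perceiving_group: str      # who holds or expresses the stereotype
    context: str               # setting in which the stereotype applies

    def as_tuple(self) -> tuple:
        """Flatten to a tuple, e.g. for keying an evaluation dataset."""
        return (
            self.target_group,
            self.associated_attribute,
            self.relationship,
            self.perceiving_group,
            self.context,
        )
```

A frozen dataclass keeps instances hashable, so an evaluation harness could deduplicate or index stereotype records directly; swapping any single field yields a new, distinct instance, which supports the modular, component-wise evaluation the summary describes.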