🤖 AI Summary
This work addresses representational harm in large language models (LLMs) arising from implicit societal stereotypes embedded in training data. We propose a fine-grained, interpretable, and quantifiable linguistic metric system, grounded in Social Category and Stereotype Communication (SCSC) theory, to detect stereotyped expressions at the sentence level. To our knowledge, this is the first systematic application of the SCSC framework to NLP bias analysis; it pairs a linguistically motivated taxonomy spanning behavioral, trait, and role dimensions with importance-weighted scoring. Using LLMs such as Llama-3.3-70B and GPT-4, we design an automated evaluation paradigm based on in-context learning (ICL) with few-shot prompting. Experiments show that our metrics exhibit strong discriminative power and interpretability on stereotyped sentences, that GPT-4 and Llama-3.3-70B achieve the best performance, and that adding few-shot examples notably improves accuracy for behavior- and trait-related metrics.
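As a rough illustration of the evaluation paradigm described above (not the paper's actual prompt, which is not reproduced here), a few-shot ICL prompt for per-sentence indicator annotation might be assembled as follows; the indicator names, example sentences, and JSON output schema are our own assumptions:

```python
# Illustrative sketch only: the indicator labels and output schema below are
# assumptions for demonstration, not the prompt released with the paper.
import json

# Hypothetical few-shot demonstrations pairing a sentence with indicator
# annotations (category label, trait attribution, behavior description).
FEW_SHOT = [
    {
        "sentence": "Nurses are caring.",
        "labels": {"category_label": "generic plural", "trait": "present",
                   "behavior": "absent"},
    },
    {
        "sentence": "My neighbour fixed the fence yesterday.",
        "labels": {"category_label": "none", "trait": "absent",
                   "behavior": "concrete action"},
    },
]

INSTRUCTION = (
    "Identify the linguistic indicators of social categories and stereotypes "
    "in the sentence. Return JSON with the keys category_label, trait, behavior."
)

def build_prompt(sentence: str) -> str:
    """Assemble an in-context-learning prompt: instruction first, then the
    few-shot demonstrations, then the target sentence to be annotated."""
    parts = [INSTRUCTION, ""]
    for ex in FEW_SHOT:
        parts.append(f"Sentence: {ex['sentence']}")
        parts.append(f"Labels: {json.dumps(ex['labels'])}")
        parts.append("")
    parts.append(f"Sentence: {sentence}")
    parts.append("Labels:")
    return "\n".join(parts)

print(build_prompt("Engineers are always introverted."))
```

The paper's reported gains from "more few-shot examples" correspond to extending the `FEW_SHOT` list in a skeleton like this one.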
📝 Abstract
Social categories and stereotypes are embedded in language and can introduce data bias into Large Language Models (LLMs). Despite safeguards, these biases often persist in model behavior, potentially leading to representational harm in outputs. While sociolinguistic research provides valuable insights into how stereotypes form, NLP approaches to stereotype detection rarely draw on this foundation and often lack objectivity, precision, and interpretability. To fill this gap, we propose a new approach that detects and quantifies the linguistic indicators of stereotypes in a sentence. From the Social Category and Stereotype Communication (SCSC) framework we derive linguistic indicators that signal strong social category formulation and stereotyping in language, and use them to build a categorization scheme. To automate the approach, we instruct different LLMs via in-context learning to apply it to a sentence: the LLM examines the sentence's linguistic properties and thereby provides the basis for a fine-grained assessment. Based on an empirical evaluation of the importance of the different linguistic indicators, we learn a scoring function that measures the linguistic indicators of a stereotype. Our annotations of stereotyped sentences show that these indicators are present in such sentences and explain the strength of a stereotype. In terms of model performance, the models generally detect and classify the category labels used to denote a social category well, but sometimes struggle to correctly evaluate the associated behaviors and characteristics. Using more few-shot examples in the prompts significantly improves performance. Performance also increases with model size: Llama-3.3-70B-Instruct and GPT-4 achieve comparable results that surpass those of Mixtral-8x7B-Instruct, GPT-4o-mini, and Llama-3.1-8B-Instruct.
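The learned scoring function itself is not given in the abstract; as a minimal sketch, an importance-weighted score over detected indicators could take the linear form below, where the indicator set and the weights are illustrative placeholders rather than the values the authors fit empirically:

```python
# Minimal sketch of an importance-weighted indicator score. The indicator
# names and weights are illustrative assumptions; the paper learns weights
# from an empirical evaluation of how important each indicator is.
ASSUMED_WEIGHTS = {
    "generic_category_label": 0.4,   # e.g. a bare plural naming a social group
    "abstract_behavior": 0.25,       # behavior phrased at a high abstraction level
    "trait_attribution": 0.25,       # adjective ascribing a stable characteristic
    "role_assignment": 0.10,         # social role tied to the category
}

def stereotype_score(indicators: dict[str, float]) -> float:
    """Weighted sum of per-indicator strengths (each in [0, 1]),
    normalized by the total weight so the score also stays in [0, 1]."""
    total = sum(ASSUMED_WEIGHTS[k] * indicators.get(k, 0.0)
                for k in ASSUMED_WEIGHTS)
    return total / sum(ASSUMED_WEIGHTS.values())

# Example: a sentence with a generic category label and a trait attribution.
print(stereotype_score({"generic_category_label": 1.0,
                        "trait_attribution": 1.0}))  # -> 0.65
```

In the paper the per-indicator values would come from the LLM's fine-grained annotations rather than being set by hand, and the weights from the empirical importance evaluation; only the weighted-sum structure is sketched here.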