Taxonomizing Representational Harms using Speech Act Theory

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the conceptual ambiguity and coarse-grained categorization of “representational harm” in generative language systems. Drawing on Austin’s speech act theory, it reconceptualizes representational harms as the perlocutionary effects (real-world impacts) of particular kinds of illocutionary acts (system behaviors), thereby refining the definitions of stereotyping, demeaning, and erasure. On this basis it develops a fine-grained taxonomy of illocutionary acts that cause representational harms, going beyond the high-level classifications of prior work. Integrating insights from linguistic anthropology, sociolinguistics, and a conceptual case study, the framework clarifies conceptual boundaries and supports the development of valid, empirically grounded measurement instruments. The core contribution is the use of illocutionary acts as a theoretical and methodological bridge between conceptual analysis and empirical measurement of representational harms.

📝 Abstract
Representational harms are widely recognized among fairness-related harms caused by generative language systems. However, their definitions are commonly under-specified. We present a framework, grounded in speech act theory (Austin, 1962), that conceptualizes representational harms caused by generative language systems as the perlocutionary effects (i.e., real-world impacts) of particular types of illocutionary acts (i.e., system behaviors). Building on this argument and drawing on relevant literature from linguistic anthropology and sociolinguistics, we provide new definitions of stereotyping, demeaning, and erasure. We then use our framework to develop a granular taxonomy of illocutionary acts that cause representational harms, going beyond the high-level taxonomies presented in previous work. We also discuss the ways that our framework and taxonomy can support the development of valid measurement instruments. Finally, we demonstrate the utility of our framework and taxonomy via a case study that engages with recent conceptual debates about what constitutes a representational harm and how such harms should be measured.
Problem

Research questions and friction points this paper is trying to address.

Classifying representational harms in generative language systems
Defining stereotyping, demeaning, and erasure using speech act theory
Developing a taxonomy for measuring representational harms effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework based on speech act theory
New definitions for stereotyping, demeaning, erasure
Granular taxonomy of harmful illocutionary acts
Emily Corvi
Microsoft Research
Hannah Washington
Microsoft Research
Stefanie Reed
Microsoft Research
Chad Atalla
Microsoft Research
Alexandra Chouldechova
Researcher @ MSR NYC FATE
P. Alex Dow
Microsoft
Jean Garcia-Gathright
Microsoft
responsible AI · recommender systems · measurement and evaluation
Nicholas Pangakis
Microsoft Research
Emily Sheng
Microsoft Research
Dan Vann
Microsoft Research
Matthew Vogel
Microsoft Research
Hanna Wallach
VP & Distinguished Scientist, Microsoft Research
AI Evaluation & Measurement · Responsible AI · Computational Social Science · ML · NLP