"I followed what felt right, not what I was told": Autonomy, Coaching, and Recognizing Bias Through AI-Mediated Dialogue

📅 2026-03-11
🤖 AI Summary
This study addresses the pervasive yet under-examined problem of ability-based microaggressions in everyday interactions, for which effective interventions supporting bias recognition remain scarce. The authors develop an empirically validated corpus of bias-laden scenarios and an AI-mediated dialogue intervention platform, then conduct a controlled experiment comparing four conditions: bias-focused prompting, inclusivity-focused prompting, unguided dialogue, and passive text reading. Findings indicate that dialogic interventions significantly outperform passive reading in enhancing bias recognition. Both inclusivity-oriented and unguided strategies improve recognition while preserving emotional equilibrium, whereas bias-focused prompts, though sharpening discernment, often provoke negative reactions and are frequently rejected by users. The work reveals a critical design tension between intervention efficacy and user acceptance in AI-mediated bias-awareness systems, offering a paradigm for developing more effective and user-sensitive tools.

📝 Abstract
Ableist microaggressions remain pervasive in everyday interactions, yet interventions that help people recognize them are limited. We present an experiment testing how AI-mediated dialogue influences recognition of ableism. A total of 160 participants completed a pre-test, an intervention, and a post-test in one of four conditions: AI nudges toward bias (Bias-Directed), AI nudges toward inclusion (Neutral-Directed), unguided dialogue (Self-Directed), and text-only reading without dialogue (Reading). Participants rated scenarios on standardness of social experience and emotional impact; those in dialogue-based conditions also provided qualitative reflections. Quantitative results showed that dialogue-based conditions produced stronger recognition gains than Reading, though their trajectories diverged: bias-directed nudges improved differentiation of bias from neutrality but increased overall negativity. Inclusive or absent nudges kept ratings more balanced, while Reading participants showed weaker gains and even declines. Qualitative findings revealed that bias-directed nudges were often rejected, whereas inclusive nudges were adopted as scaffolding. We contribute a validated vignette corpus, an AI-mediated intervention platform, and design implications highlighting the trade-offs conversational systems face when integrating bias-related nudges.
Problem

Research questions and friction points this paper is trying to address.

ableist microaggressions
bias recognition
AI-mediated dialogue
intervention
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-mediated dialogue
bias recognition
ableism intervention
conversational AI
nudge design