"It's not a representation of me": Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services

📅 2025-04-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies systemic accent bias in AI speech synthesis technologies and introduces “digital accent exclusion”—a novel form of digital inequality. Focusing on mainstream platforms including Speechify and ElevenLabs, we employ a mixed-methods approach: cross-accent speech quality evaluation, user surveys and in-depth interviews, and technical performance benchmarking—assessing synthesis fidelity and intelligibility across five non-dominant English accents. Results empirically demonstrate that accent-related performance degradation leads to audible distortion, triggering user identity alienation and service avoidance. Building on these findings, we propose a socio-technical redefinition of AI fairness, formally conceptualizing digital accent exclusion and advancing actionable, inclusive pathways for algorithmic design, accent-diverse data curation, and regulatory policy. This work bridges technical evaluation with sociolinguistic equity, offering both empirical evidence and a normative framework for mitigating accent-based discrimination in voice AI systems.

📝 Abstract
Recent advances in artificial intelligence (AI) speech generation and voice cloning technologies have produced naturalistic speech and accurate voice replication, yet their influence on sociotechnical systems across diverse accents and linguistic traits is not fully understood. This study evaluates two synthetic AI voice services (Speechify and ElevenLabs) through a mixed-methods approach using surveys and interviews to assess technical performance and uncover how users' lived experiences influence their perceptions of accent variations in these speech technologies. Our findings reveal technical performance disparities across five regional, English-language accents and demonstrate how current speech generation technologies may inadvertently reinforce linguistic privilege and accent-based discrimination, potentially creating new forms of digital exclusion. Overall, our study highlights the need for inclusive design and regulation by providing actionable insights for developers, policymakers, and organizations to ensure equitable and socially responsible AI speech technologies.
Problem

Research questions and friction points this paper is trying to address.

Examining accent bias in AI voice services
Assessing digital exclusion from speech technologies
Evaluating performance disparities across English accents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates AI voice services via mixed methods
Reveals accent-based technical performance disparities
Advocates inclusive design for equitable AI speech
Shira Michel
Northeastern University, USA
Sufi Kaur
Northeastern University, USA
Sarah Elizabeth Gillespie
Northeastern University, USA
Jeffrey Gleason
Northeastern University, USA
Christo Wilson
Professor, Northeastern University
Consumer Protection · Online Privacy · Algorithm Auditing · Dark Patterns
Avijit Ghosh
Hugging Face and University of Connecticut, USA