"I Hadn't Thought About That": Creators of Human-like AI Weigh in on Ethics And Neurodivergence

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study examines ethical blind spots and neurotypical paradigm biases in humanoid AI development, particularly concerning autism and neurodiversity, and shows how these biases implicitly reproduce dehumanizing historical patterns while distorting AI-mediated social norms. Through in-depth interviews with 24 AI developers and critical discourse analysis grounded in disability studies theory, the research identifies, for the first time, three pervasive neurodiscriminatory assumption traps. It further finds that nearly all participants were unaware of their AI designs' structural influence on user communication norms. Methodologically, the study bridges critical disability theory and AI ethics. Its primary contribution is a neurodiversity-centered ethical reflection framework that shifts design priorities from anthropomorphism toward *cognitive pluralism*, providing actionable, inclusive R&D pathways that demonstrably enhance accessibility and ethical alignment for neurodivergent users.

📝 Abstract
Human-like AI agents such as robots and chatbots are becoming increasingly popular, but they present a variety of ethical concerns. The first concern is in how we define humanness, and how our definition impacts communities historically dehumanized by scientific research. Autistic people in particular have been dehumanized by being compared to robots, making it even more important to ensure this marginalization is not reproduced by AI that may promote neuronormative social behaviors. Second, the ubiquitous use of these agents raises concerns surrounding model biases and accessibility. In our work, we investigate the experiences of the people who build and design these technologies to gain insights into their understanding and acceptance of neurodivergence, and the challenges in making their work more accessible to users with diverse needs. Even though neurodivergent individuals are often marginalized for their unique communication styles, nearly all participants overlooked the conclusions their end-users and other AI system makers may draw about communication norms from the implementation and interpretation of humanness applied in participants' work. This highlights a major gap in their broader ethical considerations, compounded by some participants' neuronormative assumptions about the behaviors and traits that distinguish "humans" from "bots" and the replication of these assumptions in their work. We examine the impact this may have on autism inclusion in society and provide recommendations for additional systemic changes towards more ethical research directions.
Problem

Research questions and friction points this paper is trying to address.

Ethical concerns in defining humanness for human-like AI agents
Model biases and accessibility issues in AI agent design
Neurodivergence inclusion gaps in AI creators' ethical considerations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Investigating creators' understanding of neurodivergence in AI
Addressing neuronormative biases in human-like AI design
Recommending systemic changes for ethical AI research
Naba Rizvi
PhD Student, UCSD
multimodal AI, NLP
Taggert Smith
University of California, San Diego
Tanvi Vidyala
University of California, San Diego
Mya Bolds
University of California, San Diego
Harper Strickland
University of California, San Diego
Andrew Begel
Associate Professor, Carnegie Mellon University
accessibility, neurodiversity, software engineering, human aspects, AI
Rua Williams
Purdue University
Imani Munyaka
University of California, San Diego