🤖 AI Summary
This study examines the implicit biases that large language models (LLMs) may carry when supporting autistic individuals, an area that has so far lacked systematic investigation. Moving beyond single-agent prompting, the authors build a ChatGPT-based multi-agent system that simulates interactions between autistic and non-autistic agents in collaborative group tasks and structured interviews. Qualitative analysis of these conversations and interview responses shows that ChatGPT consistently portrays autistic individuals as socially dependent, an assumption that could shape how it interacts with autistic users or how it conveys information about autism. Building on these findings, the paper proposes grounding future LLM design in the "double empathy problem," which reframes communication breakdowns as a mutual challenge, with the aim of improving the fairness and supportive value of AI systems for autistic users.
📝 Abstract
Large Language Models (LLMs) like ChatGPT offer potential support for autistic people, but this potential requires understanding the implicit perspectives these models might carry, including their biases and assumptions about autism. Moving beyond single-agent prompting, we utilized LLM-based multi-agent systems to investigate complex social scenarios involving autistic and non-autistic agents. In our study, agents engaged in group-task conversations and answered structured interview questions, which we analyzed to examine ChatGPT's biases and how it conceptualizes autism. We found that ChatGPT assumes autistic people are socially dependent, which may affect how it interacts with autistic users or conveys information about autism. To address these challenges, we propose adopting the double empathy problem, which reframes communication breakdowns as a mutual challenge. We describe how future LLMs could address the biases we observed and improve interactions involving autistic people by incorporating the double empathy problem into their design.
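The multi-agent setup described above can be pictured as two LLM-backed agents taking alternating turns on a shared task. The sketch below is a minimal illustration of that idea, assuming the OpenAI chat completions API; the persona prompts, model name, and turn structure are illustrative assumptions, not the authors' actual study protocol.

```python
# Minimal sketch of a two-agent dialogue loop, assuming the OpenAI chat
# completions API. Personas, prompts, and model name are illustrative
# assumptions, not the authors' actual study materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompts for the two simulated agents.
PERSONAS = {
    "autistic_agent": "You are role-playing an autistic adult collaborating on a group task.",
    "non_autistic_agent": "You are role-playing a non-autistic adult collaborating on a group task.",
}

def agent_reply(persona: str, history: list[dict]) -> str:
    """Ask one agent for its next conversational turn given the shared history."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": PERSONAS[persona]}] + history,
    )
    return response.choices[0].message.content

def run_group_task(task: str, turns: int = 6) -> list[dict]:
    """Alternate turns between the two agents on a shared task prompt."""
    history = [{"role": "user", "content": f"Group task: {task}"}]
    speakers = ["autistic_agent", "non_autistic_agent"]
    for i in range(turns):
        speaker = speakers[i % 2]
        reply = agent_reply(speaker, history)
        # Each agent sees prior turns as user messages attributed to a named speaker.
        history.append({"role": "user", "content": f"{speaker}: {reply}"})
    return history

if __name__ == "__main__":
    transcript = run_group_task("Plan a short team presentation together.")
    for turn in transcript:
        print(turn["content"])
```

The resulting transcripts could then be analyzed qualitatively, as in the study, to surface how the model characterizes each agent's social behavior.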