AI Summary
This study addresses the lack of a systematic understanding and classification of psychological risks associated with AI conversational agents. We propose the first taxonomy grounded in users' lived experience. Through a survey of 283 individuals with lived mental health experience and multiple rounds of expert participatory workshops, we employed qualitative coding, thematic modeling, and iterative development and validation of contextualized vignettes to construct a three-dimensional, structured taxonomy comprising 19 AI behavioral patterns, 21 adverse psychological impacts, and 15 user contextual categories. Our contributions are twofold: (1) establishing a user-centered, psychology-specific risk classification paradigm; and (2) introducing a dynamic, multi-path vignette analysis framework that explicitly links AI behaviors to psychological outcomes within individual user contexts. The resulting taxonomy yields actionable design guidelines for developers, researchers, and policymakers to proactively mitigate psychological harms in AI dialogue systems.
Abstract
The recent rise in popularity of AI conversational agents has led to their increased use for improving productivity and supporting well-being. While previous research has aimed to understand the risks of interacting with AI conversational agents, these studies often fall short of capturing the lived experiences of individuals. Additionally, psychological risks have often been presented as a sub-category within broader AI-related risks in past taxonomies, leading to under-representation of the psychological impact of AI use. To address these challenges, our work presents a novel risk taxonomy focused on the psychological risks of using AI, grounded in the lived experiences of individuals. We employed a mixed-methods approach, combining a comprehensive survey of 283 people with lived mental health experience and workshops with experts with lived experience, to develop the taxonomy. Our taxonomy features 19 AI behaviors, 21 negative psychological impacts, and 15 contexts related to individuals. Additionally, we propose a novel multi-path vignette-based framework for understanding the complex interplay between AI behaviors, psychological impacts, and individual user contexts. Finally, based on feedback obtained from the workshop sessions, we present design recommendations for developing safer and more robust AI agents. Our work offers an in-depth understanding of the psychological risks associated with AI conversational agents and provides actionable recommendations for policymakers, researchers, and developers.