From Symptoms to Systems: An Expert-Guided Approach to Understanding Risks of Generative AI for Eating Disorders

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative AI may exacerbate clinical symptoms in individuals vulnerable to eating disorders, but existing safety mechanisms frequently overlook subtle yet clinically significant risk signals. Method: Drawing on semi-structured interviews with 15 clinical experts, this study employs qualitative abductive analysis to identify seven distinct risk pathways through which generative AI—particularly in health advice and behavioral guidance contexts—can trigger or worsen eating disorder symptoms. Contribution/Results: The authors propose a clinically co-designed taxonomy of generative AI–related eating disorder risks, systematically mapping user interaction patterns to underlying psychopathological mechanisms. This framework addresses a critical gap in AI mental health risk assessment by integrating deep clinical expertise, directly informing the development of risk-detection approaches and participatory safety governance practices. It provides empirically grounded, actionable guidance for ethically aligned AI design targeting high-sensitivity populations.

📝 Abstract
Generative AI systems may pose serious risks to individuals vulnerable to eating disorders. Existing safeguards tend to overlook subtle but clinically significant cues, leaving many risks unaddressed. To better understand the nature of these risks, we conducted semi-structured interviews with 15 clinicians, researchers, and advocates with expertise in eating disorders. Using abductive qualitative analysis, we developed an expert-guided taxonomy of generative AI risks across seven categories: (1) providing generalized health advice; (2) encouraging disordered behaviors; (3) supporting symptom concealment; (4) creating thinspiration; (5) reinforcing negative self-beliefs; (6) promoting excessive focus on the body; and (7) perpetuating narrow views about eating disorders. Our results demonstrate how certain user interactions with generative AI systems intersect with clinical features of eating disorders in ways that may intensify risk. We discuss implications of our work, including approaches for risk assessment, safeguard design, and participatory evaluation practices with domain experts.
Problem

Research questions and friction points this paper is trying to address.

Identifies risks of generative AI for eating disorder vulnerabilities
Develops expert-guided taxonomy of seven risk categories
Proposes approaches for risk assessment and safeguard design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expert-guided taxonomy developed via abductive qualitative analysis
Semi-structured interviews with clinicians and domain experts
Risk assessment and safeguard design informed by expert insights
Amy Winecoff
Center for Democracy & Technology, USA
Kevin Klyman
Stanford, Harvard
Foundation Models · AI Regulation · Geopolitics