🤖 AI Summary
Generative AI may exacerbate clinical symptoms in individuals vulnerable to eating disorders, yet existing safety mechanisms frequently overlook subtle but clinically significant risk signals.
Method: Drawing on semi-structured interviews with 15 clinical experts, this study employs qualitative abductive analysis to identify seven distinct risk pathways through which generative AI—particularly in health advice and behavioral guidance contexts—can trigger or worsen eating disorder symptoms.
Contribution/Results: We propose the first clinically co-designed taxonomy of generative AI–related eating disorder risks, systematically mapping user interaction patterns to underlying psychopathological mechanisms. This framework addresses a critical gap in AI mental health risk assessment by integrating clinical expertise, directly informing the development of risk-detection approaches and participatory safety governance practices. It provides empirically grounded, actionable guidance for ethically aligned AI design targeting high-sensitivity populations.
📝 Abstract
Generative AI systems may pose serious risks to individuals vulnerable to eating disorders. Existing safeguards tend to overlook subtle but clinically significant cues, leaving many risks unaddressed. To better understand the nature of these risks, we conducted semi-structured interviews with 15 clinicians, researchers, and advocates with expertise in eating disorders. Using abductive qualitative analysis, we developed an expert-guided taxonomy of generative AI risks across seven categories: (1) providing generalized health advice; (2) encouraging disordered behaviors; (3) supporting symptom concealment; (4) creating thinspiration; (5) reinforcing negative self-beliefs; (6) promoting excessive focus on the body; and (7) perpetuating narrow views about eating disorders. Our results demonstrate how certain user interactions with generative AI systems intersect with clinical features of eating disorders in ways that may intensify risk. We discuss implications of our work, including approaches for risk assessment, safeguard design, and participatory evaluation practices with domain experts.