🤖 AI Summary
This study investigates how the generative AI video model Sora 2 represents depression and examines whether outputs differ between the consumer-facing application and the developer API. Prompting both interfaces with "Depression" to produce 50 videos each, the authors employed mixed methods: manual coding of narrative structure, environmental context, and character states, alongside computational analyses of visual aesthetics, audio features, semantic content, and temporal dynamics. Results reveal a pronounced "recovery bias" in App-generated videos: 78% depicted a narrative arc progressing from depression to recovery (versus only 14% of API outputs), accompanied by increasing brightness and motion over time. Both access points relied heavily on a narrow set of cultural symbols (e.g., hoodies, rain, windows) and predominantly featured solitary individuals aged 20-30, with gender representation varying by access method. This work provides the first evidence that platform access tier systematically shapes AI-generated mental health representations.
📝 Abstract
Generative video models are increasingly capable of producing complex depictions of mental health experiences, yet little is known about how these systems represent conditions like depression. This study characterizes how OpenAI's Sora 2 generative video model depicts depression and examines whether depictions differ between the consumer App and developer API access points. We generated 100 videos using the single-word prompt "Depression" across two access points: the consumer App (n=50) and developer API (n=50). Two trained coders independently coded narrative structure, visual environments, objects, figure demographics, and figure states. Computational features across visual aesthetics, audio, semantic content, and temporal dynamics were extracted and compared between access points. App-generated videos exhibited a pronounced recovery bias: 78% (39/50) featured narrative arcs progressing from depressive states toward resolution, compared with 14% (7/50) of API outputs. App videos brightened over time (slope = 2.90 brightness units/second vs. -0.18 for API; d = 1.59, q < .001) and contained three times more motion (d = 2.07, q < .001). Across both access points, videos converged on a narrow visual vocabulary and featured recurring objects including hoodies (n=194), windows (n=148), and rain (n=83). Figures were predominantly young adults (88% aged 20-30) and nearly always alone (98%). Sora 2 does not invent new visual grammars for depression but compresses and recombines cultural iconographies, while platform-level constraints substantially shape which narratives reach users. Clinicians should be aware that AI-generated mental health video content reflects training data and platform design rather than clinical knowledge, and that patients may encounter such content during vulnerable periods.