🤖 AI Summary
Individuals frequently face daily stressors—such as public speaking—without access to professional or peer support, limiting effective self-regulation and mental health self-care.
Method: This study introduces an immersive social simulation system integrating VR/AR with large language models (LLMs) for personalized, on-demand stress-management training. It pioneers deep LLM integration into immersive social simulations and establishes a design framework balancing ecological validity and psychological safety, explicitly addressing three key challenges: hyperrealism risks, LLM-generated advice credibility, and accessibility.
Contribution/Results: Through a prototype-driven user study, comprising eight interactive stress-mitigation prototypes and semi-structured interviews with 19 participants, the study shows how social simulation can bridge the gap in autonomous mental health training. Findings yield empirically grounded, scalable design principles for self-care interventions, advancing human-centered, accessible digital mental health tools.
📝 Abstract
Stress is an inevitable part of day-to-day life, yet many people are unable to manage it on their own, particularly when professional or peer support is not readily available. As self-care becomes increasingly vital for mental well-being, this paper explores the potential of social simulation as a safe, virtual environment for practicing stress relief in everyday situations. Leveraging the immersive capabilities of VR, AR, and LLMs, we developed eight interactive prototypes for common everyday stressful scenarios (e.g., public speaking) and then conducted prototype-driven semi-structured interviews with 19 participants. We reveal that people currently lack effective means to support themselves through everyday stress, and we find that social simulation fills a gap by recreating realistic environments for training mental health practices. We outline key considerations for future development of simulation for self-care, including the risk of re-traumatization from hyper-realism, distrust of the timing of LLM-generated mental health recommendations, and the value of accessibility in self-care interventions.