🤖 AI Summary
This study addresses the limitations of current health applications, which predominantly emphasize metric tracking and goal achievement, often triggering social comparison and performance anxiety while neglecting mechanisms for self-reflection. To counter this, the authors propose KRIYA, an AI-powered health companion centered on co-interpretive interaction. Through features such as Comfort Zone, Detective Mode, and What-If Planning, KRIYA encourages users to explore their health data through curiosity-driven inquiry and to imagine alternative future scenarios. Employing a prototype interaction with hypothetical data, semi-structured interviews, and qualitative analysis, the research reveals that users came to perceive data engagement as interpretive rather than performative, experienced reflection as supportive or pressuring depending on its emotional framing, and developed trust in the AI through transparency. This work challenges the dominant performance-oriented paradigm by advancing a novel approach to AI-health interaction grounded in self-compassion and reflective meaning-making.
📝 Abstract
Most personal wellbeing apps present summative dashboards of health and physical activity metrics, yet many users struggle to translate this information into meaningful understanding. These apps commonly support engagement through goals, reminders, and structured targets, which can reinforce comparison, judgment, and performance anxiety. To explore a complementary approach that prioritizes self-reflection, we designed KRIYA, an AI wellbeing companion that supports co-interpretive engagement with personal wellbeing data. KRIYA collaborates with users to explore questions, explanations, and future scenarios through features such as Comfort Zone, Detective Mode, and What-If Planning. We conducted semi-structured interviews with 18 college students who interacted with a KRIYA prototype using hypothetical data. Our findings show that through interaction with KRIYA, users framed engagement with wellbeing data as interpretation rather than performance, experienced reflection as supportive or pressuring depending on its emotional framing, and developed trust through transparency. We discuss design implications for AI companions that support curiosity, self-compassion, and reflective sensemaking of personal health data.