🤖 AI Summary
This paper critically examines how AI-driven self-tracking technologies (e.g., the Oura Ring), through automated data collection and large language model (LLM)-generated insights, undermine user autonomy and reflective capacity. Grounded in a novel application of "AI solutionism" theory to digital health, it advances the central thesis that automation erodes agency. Employing a mixed-methods approach within a critical technical practice framework, including HCI experiments, phenomenological analysis of technology experience, and empirical observation of wearable device usage, the study identifies mechanisms by which algorithmic health feedback diminishes cognitive engagement and impedes sustained behavioral change. Key contributions include: (1) mapping the systematic pathways through which automation attenuates agentic capacities; (2) proposing human-centered design strategies to recalibrate the boundaries of technological empowerment; and (3) formulating ethically grounded design principles and an intervention framework for responsible AI-enabled health applications.
📝 Abstract
Self-tracking technologies and wearables automate data collection and insight generation with the support of artificial intelligence systems, and many emerging studies explore ways to extend these features further through large language models (LLMs). The intent is to reduce capture burden and the cognitive stress of health-related decision making, but such studies neglect to consider how automation stymies the agency and independent reflection of users of self-tracking interventions. In this position paper, we explore the consequences of automation in self-tracking by relating them to our experiences investigating the Oura Ring, a sleep wearable, and discuss potential remedies.