🤖 AI Summary
This study addresses a critical gap in existing research, which has predominantly applied large language models (LLMs) to discrete clinical tasks while overlooking their dynamic role throughout patients' authentic, longitudinal healthcare journeys. Through a four-week diary study tracking 25 patients' real-world LLM use, combined with thematic analysis, the work reveals the multidimensional support LLMs provide across behavioral, informational, emotional, and cognitive domains. The findings suggest that LLMs function not merely as decision-support tools but as "longitudinal boundary companions" that accompany patients throughout their care trajectories. This reconceptualization reshapes agency, trust, and power dynamics within patient–clinician relationships and offers a novel paradigm for designing human–AI collaborative healthcare systems.
📝 Abstract
Large language models (LLMs) have been increasingly adopted to support patients' healthcare-seeking in recent years. While prior patient-centered studies have examined the capabilities and experiences of LLM-based tools in specific health-related tasks such as information-seeking, diagnosis, or decision support, the inherently longitudinal nature of real-world healthcare has been underexplored. This paper presents a four-week diary study with 25 patients to examine LLMs' roles across healthcare-seeking trajectories. Our analysis reveals that patients integrate LLMs not merely as simple decision-support tools but as dynamic companions that scaffold their journeys at behavioral, informational, emotional, and cognitive levels. Meanwhile, patients actively assign diverse socio-technical meanings to LLMs, altering the traditional dynamics of agency, trust, and power in patient-provider relationships. Drawing on these findings, we conceptualize future LLMs as longitudinal boundary companions that continuously mediate between patients and clinicians throughout their healthcare-seeking trajectories.