More than Decision Support: Exploring Patients' Longitudinal Usage of Large Language Models in Real-World Healthcare-Seeking Journeys

📅 2026-02-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses a critical gap in existing research, which has predominantly focused on the application of large language models (LLMs) to discrete clinical tasks while overlooking their dynamic role throughout patients’ authentic, longitudinal healthcare journeys. Through a four-week diary study tracking 25 patients’ real-world LLM use, combined with thematic analysis, the work reveals multidimensional support provided by LLMs across behavioral, informational, emotional, and cognitive domains. The findings propose that LLMs function not merely as decision-support tools but as “longitudinal boundary companions” that accompany patients throughout their care trajectories. This reconceptualization reshapes agency, trust, and power dynamics within patient–clinician relationships and offers a novel paradigm for designing human–AI collaborative healthcare systems.

📝 Abstract
Large language models (LLMs) have been increasingly adopted to support patients' healthcare-seeking in recent years. While prior patient-centered studies have examined the capabilities and experiences of LLM-based tools in specific health-related tasks such as information-seeking, diagnosis, or decision support, the inherently longitudinal nature of healthcare in real-world practice has been underexplored. This paper presents a four-week diary study with 25 patients to examine LLMs' roles across healthcare-seeking trajectories. Our analysis reveals that patients integrate LLMs not just as simple decision-support tools, but as dynamic companions that scaffold their journeys at behavioral, informational, emotional, and cognitive levels. Meanwhile, patients actively assign diverse socio-technical meanings to LLMs, altering the traditional dynamics of agency, trust, and power in patient–provider relationships. Drawing on these findings, we conceptualize future LLMs as a "longitudinal boundary companion" that continuously mediates between patients and clinicians throughout longitudinal healthcare-seeking trajectories.
Problem

Research questions and friction points this paper is trying to address.

large language models
healthcare-seeking
longitudinal
patient experience
socio-technical
Innovation

Methods, ideas, or system contributions that make the work stand out.

longitudinal healthcare
large language models
patient journey
boundary companion
socio-technical mediation
Yancheng Cao
Tongji University
Human-Computer Interaction, Health

Yishu Ji
Georgia Institute of Technology

Chris Yue Fu
University of Washington

Sahiti Dharmavaram
Columbia University

Meghan Turchioe
Columbia University

Natalie C Benda
Columbia University

Lena Mamykina
Associate Professor of Biomedical Informatics, Columbia University
Human-Computer Interaction, Medical Informatics, Computer-Supported Cooperative Work

Yuling Sun
Fudan University
CSCW, HCI, Social Computing, Aging, Healthcare

Xuhai "Orson" Xu
Columbia University