🤖 AI Summary
Amid population aging and shrinking care networks, patients who lack decision-making capacity increasingly depend on intergenerational surrogate decision-making in Advance Care Planning (ACP), a process that is both highly subjective and high-risk.
Method: This study introduces the "AI Individualized Advocate" paradigm, which emphasizes mutual intelligibility co-constructed between human and AI rather than unidirectional command execution. The approach combines experiential prototyping, participatory workshops, and qualitative behavioral analysis with 15 participants.
Contribution/Results: We define a two-dimensional design space that balances surrogate autonomy against human control. From our empirical findings, we derive seven key design principles for AI agents in ACP contexts. Critically, we provide the first empirical evidence that users willingly train AI to model their emotions and values, yielding stable preference representations. This work establishes both a theoretical framework and an actionable implementation pathway for trustworthy, value-aligned medical AI agents.
📝 Abstract
Serious illness can deprive patients of the capacity to speak for themselves. As populations age and caregiver networks shrink, the need for reliable support in Advance Care Planning (ACP) grows. To probe the fraught design space of using proxy agents for high-risk, high-subjectivity decisions, we built an experience prototype (acpagent) and asked 15 participants across 4 workshops to train it to be their personal proxy in ACP decisions. We analysed their coping strategies and feature requests and mapped the results onto axes of agent autonomy and human control. Our findings argue for a potential new role of AI in ACP in which agents act as personal advocates for individuals, building mutual intelligibility over time. We conclude with design recommendations for balancing the risks and benefits of such an agent.