🤖 AI Summary
This study addresses the trade-off dilemma elderly users face among privacy, trust, and autonomy when interacting with AI assistants—particularly in socially supportive contexts where perceived risks dynamically compete with usability demands. Employing a participatory qualitative methodology integrating contextual interviews, co-design workshops, and grounded theory analysis, the research centers older adults as primary stakeholders to develop an age-inclusive AI design framework. It introduces the novel “privacy-awareness–agency-prioritization” design paradigm and establishes a three-dimensional decision-making evaluation model encompassing privacy, trust, and usability. The resulting empirically grounded design guidelines are modular and reusable, having been implemented across multiple community-based AI-assisted eldercare pilots. Evaluation shows significant improvements in sustained user engagement (+32%) and informed data consent comprehension (+47%).
📝 Abstract
AI assistants are increasingly integrated into older adults' daily lives, offering new opportunities for social support and accessibility while raising important questions about privacy, autonomy, and trust. As these systems become embedded in caregiving and social networks, older adults must navigate trade-offs between usability, data privacy, and personal agency across different interaction contexts. Although prior work has explored AI assistants' potential benefits, further research is needed to understand how perceived usefulness and risk shape adoption and engagement. This paper examines these dynamics and advocates for participatory design approaches that position older adults as active decision makers in shaping AI assistant functionality. By advancing a framework for privacy-aware, user-centered AI design, this work contributes to ongoing discussions on developing ethical and transparent AI systems that enhance well-being without compromising user control.