🤖 AI Summary
This study addresses the tension between clinical guideline adherence and user expectations in low-resource settings by examining the low acceptance of guideline-aligned but norm-divergent medical recommendations (such as avoiding antibiotics, antidiarrheals, or injections) delivered by AI-powered chatbots to urban Indian adults. Through a mixed-methods, vignette-based study with 200 participants, the research reveals a conflict between the accuracy of AI-generated advice and its perceived acceptability when that advice diverges from users’ experiential health beliefs. To bridge this gap, the work introduces “context-aware nudges” as an expectation-alignment mechanism. Findings show that this design shifts user preferences toward guideline-concordant yet locally unconventional recommendations, offering a pathway toward equitable and effective medical AI systems tailored to the Global South.
📝 Abstract
When medical chatbots provide advice that conflicts with users’ lived care experiences, users are left to interpret, negotiate, and evaluate the legitimacy of that guidance. In India, the widespread overuse of antibiotics, antidiarrheals, and injections has shifted patient expectations away from the guideline-aligned advice that chatbots are trained to provide. We present a mixed-methods, vignette-based study with 200 urban Indian adults examining preferences for or against guideline-aligned but norm-divergent advice in chatbot transcripts. We find that a majority of users reject such advice, drawing on diverse rationales grounded in their lived expectations. Through the design and introduction of context-aware nudges, we support expectation alignment that shifts preferences toward transcripts containing guideline-aligned advice. In doing so, we surface key tensions in the equitable design of medical chatbots in the Global South.
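
The abstract does not specify how the nudges are operationalized. As one illustration only, the minimal sketch below shows how an expectation-aligning message might be inserted into a chatbot transcript ahead of norm-divergent advice; the `NUDGES` table, the `render_transcript` helper, and all wording are hypothetical assumptions, not the authors' system.

```python
# A minimal sketch of one way a "context-aware nudge" could be rendered in a
# chatbot transcript before guideline-aligned but norm-divergent advice.
# All names and messages here are hypothetical illustrations, not the
# study's actual implementation or wording.

# Hypothetical nudges that acknowledge common care expectations before the
# bot delivers advice that diverges from them.
NUDGES = {
    "antibiotics": (
        "Many clinics prescribe antibiotics for symptoms like yours, so this "
        "advice may feel unfamiliar. For viral or self-limiting illnesses, "
        "antibiotics do not help and contribute to antimicrobial resistance."
    ),
    "injections": (
        "Injections are often expected for faster relief, but oral medicines "
        "are equally effective for most conditions and carry fewer risks."
    ),
}


def render_transcript(user_message: str, advice: str, topic: str) -> str:
    """Prepend an expectation-aligning nudge, when one exists for the topic,
    before the chatbot's guideline-aligned advice."""
    lines = [f"User: {user_message}"]
    nudge = NUDGES.get(topic)
    if nudge:
        lines.append(f"Bot: {nudge}")
    lines.append(f"Bot: {advice}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(render_transcript(
        user_message="I've had loose motions since yesterday. "
                     "Which antibiotic should I take?",
        advice="Oral rehydration solution and zinc are recommended; "
               "antibiotics are usually not needed for acute diarrhoea.",
        topic="antibiotics",
    ))
```

In this sketch the nudge precedes the advice, so the divergence from the user's expectation is acknowledged before the recommendation lands, mirroring the expectation-alignment framing in the abstract.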