Designing with Culture: How Social Norms Shape Trust and Preference in Health Chatbots

📅 2025-09-19
🤖 AI Summary
This study investigates how cultural framing—particularly the linguistic formulation of social norms—affects community health workers' (CHWs') trust in AI chatbots within collectivist contexts. Method: A mixed-methods field study was conducted in rural India, experimentally comparing four normative framings—neutral, descriptive, narrative-identity, and prescriptive-authoritative—for identical health content. Contribution/Results: Narrative framing was most preferred but induced uncritical reliance; prescriptive-authoritative framing, though less accepted, fostered more calibrated, reasoned trust. The study introduces "calibrated trust" as a culturally grounded evaluative metric for AI safety, proposing dynamic framing strategies that balance acceptability with cognitive autonomy. It provides the first empirical evidence of how norm formulation differentially modulates human-AI trust mechanisms in collectivist settings, offering both a theoretical framework and actionable design principles for AI-enabled health interventions in the Global South.

📝 Abstract
AI-driven chatbots are increasingly used to support community health workers (CHWs) in developing regions, yet little is known about how cultural framings in chatbot design shape trust in collectivist contexts where decisions are rarely made in isolation. This paper examines how CHWs in rural India responded to chatbots that delivered identical health content but varied in one specific cultural lever -- social norms. Through a mixed-methods study with 61 ASHAs who compared four normative framings -- neutral, descriptive, narrative identity, and injunctive authority -- we (1) analyze how framings influence preferences and trust, and (2) compare effects across low- and high-ambiguity scenarios. Results show that narrative framings were most preferred but encouraged uncritical overreliance, while authority framings were least preferred yet supported calibrated trust. We conclude with design recommendations for dynamic framing strategies that adapt to context and argue for calibrated trust -- following correct advice and resisting incorrect advice -- as a critical evaluation metric for safe, culturally grounded AI.
Problem

Research questions and friction points this paper is trying to address.

Examining cultural framings' impact on health chatbot trust
Comparing normative design strategies in collectivist rural India
Evaluating trust calibration across varying ambiguity scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used narrative and authority framings
Adapted dynamic strategies to context
Measured calibrated trust as metric
Authors
Arpita Wadhwa, Harvard University
Aditya Vashistha, Assistant Professor, Cornell University (HCI, ICTD, Accessibility, Responsible AI)
Mohit Jain, Microsoft Research