Conversations with AI Chatbots Increase Short-Term Vaccine Intentions But Do Not Outperform Standard Public Health Messaging

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study examines whether LLM-powered chatbots increase short-term HPV vaccination intention among vaccine-hesitant parents more effectively than official public health materials. Method: A pre-registered, multinational randomized controlled trial (US, Canada, UK) with real-world participants compared brief multi-turn conversations with an LLM-based chatbot (output style modulated via prompt engineering) against both a weak control (no message) and a strong control reflecting the standard of care (reading authoritative health information). Primary outcomes were the immediate change in vaccination intention and its persistence at a 15-day follow-up. Contribution/Results: The chatbot significantly increased vaccination intention immediately after the conversation (+7.1–10.3 points on a 100-point scale) relative to no message, but this effect was not superior to that of official materials and had faded by the 15-day follow-up; in contrast, the effect of official materials persisted. This work provides the first empirical evidence, from a real-world multinational trial, of the short-term efficacy boundary of LLM-based interventions, revealing no net advantage over traditional authoritative health communication. It thus offers critical evidence for refining the role and optimization strategies of AI in public health communication.

📝 Abstract
Large language model (LLM) based chatbots show promise in persuasive communication, but existing studies often rely on weak controls or focus on belief change rather than behavioral intentions or outcomes. This pre-registered multi-country (US, Canada, UK) randomized controlled trial involving 930 vaccine-hesitant parents evaluated brief (three-minute) multi-turn conversations with LLM-based chatbots against standard public health messaging approaches for increasing human papillomavirus (HPV) vaccine intentions for their children. Participants were randomly assigned to: (1) a weak control (no message), (2) a strong control reflecting the standard of care (reading official public health materials), or (3 and 4) one of two chatbot conditions. One chatbot was prompted to deliver short, conversational responses, while the other used the model's default output style (longer with bullet points). While chatbot interactions significantly increased self-reported vaccination intent (by 7.1-10.3 points on a 100-point scale) compared to no message, they did not outperform standard public health materials, with the conversational chatbot performing significantly worse. Additionally, while the short-term effects of chatbot interactions faded during a 15-day follow-up, the effects of public health material persisted relative to no message. These findings suggest that while LLMs can effectively shift vaccination intentions in the short-term, their incremental value over existing public health communications is questionable, offering a more tempered view of their persuasive capabilities and highlighting the importance of integrating AI-driven tools alongside, rather than replacing, current public health strategies.
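The four-arm design described in the abstract (weak control, strong control, and two chatbot variants, with intention measured on a 100-point scale) can be sketched as a small simulation. This is purely illustrative and not the authors' analysis code: the arm names, assignment scheme, and per-arm effect sizes below are synthetic assumptions loosely echoing the reported 7.1–10.3 point chatbot effect versus no message.

```python
import random
import statistics

# Illustrative sketch of the four-arm randomized design (synthetic data,
# not the study's actual analysis). Arm names are assumptions.
random.seed(0)

ARMS = ["no_message", "public_health_materials",
        "chatbot_conversational", "chatbot_default"]

def assign_arms(n_participants):
    """Randomly assign each participant to one of the four conditions."""
    return [random.choice(ARMS) for _ in range(n_participants)]

def mean_shift_by_arm(arms, shifts):
    """Average post-minus-pre intention change (0-100 scale) per arm."""
    by_arm = {arm: [] for arm in ARMS}
    for arm, shift in zip(arms, shifts):
        by_arm[arm].append(shift)
    return {arm: statistics.mean(vals) for arm, vals in by_arm.items()}

# Hypothetical mean shifts per arm (points on the 100-point scale).
assumed_effect = {"no_message": 0.0, "public_health_materials": 9.0,
                  "chatbot_conversational": 7.0, "chatbot_default": 10.0}

arms = assign_arms(930)  # the trial's reported sample size
shifts = [random.gauss(assumed_effect[a], 15.0) for a in arms]

estimates = mean_shift_by_arm(arms, shifts)
# Treatment effect of each arm relative to the weak (no-message) control:
effects = {a: estimates[a] - estimates["no_message"] for a in ARMS}
```

With a design like this, the paper's headline comparison is simply whether the chatbot arms' effects exceed that of the public-health-materials arm, not just the no-message arm.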
Problem

Research questions and friction points this paper is trying to address.

Evaluating chatbot effectiveness in increasing HPV vaccine intentions
Comparing chatbot messaging to standard public health materials
Assessing short-term and long-term effects of chatbot persuasion
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based chatbots for persuasive communication
Multi-country randomized controlled trial design
Comparison with standard public health messaging
Neil K. R. Sehgal
Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
Sunny Rai
University of Pennsylvania
LLMs, Value Alignment, Digital Mental Health, Creative Text Processing, AI for Health
Manuel Tonneau
University of Oxford, World Bank, New York University
Computational Social Science, Natural Language Processing, Online Harms
Anish K. Agarwal
Penn Medicine Center for Health Care Transformation and Innovation, University of Pennsylvania, Philadelphia, PA, USA; Department of Emergency Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
Joseph Cappella
Annenberg School of Communication, University of Pennsylvania, Philadelphia, PA, USA
Melanie L Kornides
School of Nursing, University of Pennsylvania, Philadelphia, PA, USA
Lyle Ungar
University of Pennsylvania
machine learning, computational linguistics, computational social science
Sharath Chandra Guntuku
University of Pennsylvania
Digital Health, Computational Psychology, Social Listening, Applied Machine Learning