🤖 AI Summary
This study examines whether LLM-powered chatbots can more effectively increase short-term HPV vaccination intention among vaccine-hesitant parents than official public health materials.
Method: A multinational randomized controlled trial with real-world participants (US, Canada, UK) compared LLM-based conversational systems (output style modulated via prompt engineering) against both a no-message control and a strong control reflecting the standard of care (reading official public health materials). Vaccination intention was measured immediately after the intervention and again at a 15-day follow-up to assess persistence.
Contribution/Results: The chatbots significantly increased immediate vaccination intention (+7.1–10.3 points on a 100-point scale) relative to no message, but they did not outperform official materials, and their effect had faded by the 15-day follow-up; in contrast, the effect of official materials persisted. This work provides empirical evidence, from a real-world multinational trial, on the short-term limits of LLM-based persuasive interventions, revealing no net advantage over traditional authoritative health communication. It thus offers critical evidence for refining the role and deployment strategies of AI in public health communication.
📝 Abstract
Large language model (LLM)-based chatbots show promise in persuasive communication, but existing studies often rely on weak controls or focus on belief change rather than behavioral intentions or outcomes. This pre-registered multi-country (US, Canada, UK) randomized controlled trial involving 930 vaccine-hesitant parents evaluated brief (three-minute) multi-turn conversations with LLM-based chatbots against standard public health messaging approaches for increasing human papillomavirus (HPV) vaccine intentions for their children. Participants were randomly assigned to: (1) a weak control (no message), (2) a strong control reflecting the standard of care (reading official public health materials), or (3 and 4) one of two chatbot conditions. One chatbot was prompted to deliver short, conversational responses, while the other used the model's default output style (longer, with bullet points). While chatbot interactions significantly increased self-reported vaccination intent (by 7.1–10.3 points on a 100-point scale) compared to no message, they did not outperform standard public health materials, and the conversational chatbot performed significantly worse. Additionally, while the short-term effects of chatbot interactions faded during a 15-day follow-up, the effects of the public health materials persisted relative to no message. These findings suggest that while LLMs can effectively shift vaccination intentions in the short term, their incremental value over existing public health communications is questionable. This offers a more tempered view of their persuasive capabilities and highlights the importance of integrating AI-driven tools alongside, rather than in place of, current public health strategies.