Utilising Large Language Models for Generating Effective Counter Arguments to Anti-Vaccine Tweets

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vaccine misinformation on social media poses severe threats to public health, yet existing research predominantly focuses on detection rather than real-time, precise refutation. This paper proposes an intervention framework integrating multi-label classification and context-aware generation. First, it constructs a fine-grained, multi-dimensional classifier for anti-vaccine tweets, categorising claims by type, sentiment polarity, and scientific fallacy class. Second, it leverages label descriptions to guide large language models (LLMs) in generating structured, verifiable, and personalised rebuttals. The framework employs label-augmented prompt engineering and structured fine-tuning to enhance scientific accuracy and persuasive efficacy. Experimental results demonstrate significant improvements over baselines across human evaluation, LLM-based automatic evaluation, and BLEU/ROUGE metrics. Generated rebuttals achieve high factual accuracy and user acceptance, indicating strong potential for scalable, real-time public health intervention.

📝 Abstract
In an era where public health is increasingly influenced by information shared on social media, combating vaccine skepticism and misinformation has become a critical societal goal. Misleading narratives around vaccination have spread widely, creating barriers to achieving high immunisation rates and undermining trust in health recommendations. While efforts to detect misinformation have made significant progress, the generation of real-time counter-arguments tailored to debunk such claims remains an insufficiently explored area. In this work, we explore the capabilities of LLMs to generate sound counter-argument rebuttals to vaccine misinformation. Building on prior research in misinformation debunking, we experiment with various prompting strategies and fine-tuning approaches to optimise counter-argument generation. Additionally, we train classifiers to categorise anti-vaccine tweets into multi-label categories such as concerns about vaccine efficacy, side effects, and political influences, allowing for more context-aware rebuttals. Our evaluation, conducted through human judgment, LLM-based assessments, and automatic metrics, reveals strong alignment across these methods. Our findings demonstrate that integrating label descriptions and structured fine-tuning enhances counter-argument effectiveness, offering a promising approach for mitigating vaccine misinformation at scale.
Problem

Research questions and friction points this paper is trying to address.

Generating real-time counter-arguments to anti-vaccine tweets using LLMs
Categorizing vaccine misinformation by efficacy, side effects, and politics
Enhancing counter-argument effectiveness through structured fine-tuning methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs to generate counter-arguments against vaccine misinformation
Fine-tuning models with structured prompts for optimized responses
Classifying anti-vaccine tweets for context-aware rebuttal generation
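The label-augmented prompting idea above can be sketched roughly as follows. The label names, descriptions, and prompt wording here are illustrative assumptions, not the paper's exact templates: predicted multi-label categories are mapped to short descriptions, which are prepended to the tweet so the LLM can tailor its rebuttal to the specific concerns raised.

```python
# Illustrative sketch of label-augmented prompt construction.
# Label names and descriptions are hypothetical, not the paper's exact set.
LABEL_DESCRIPTIONS = {
    "efficacy": "claims that vaccines do not work or offer weak protection",
    "side_effects": "claims that vaccines cause serious adverse reactions",
    "political": "claims framing vaccination as political coercion or an agenda",
}

def build_prompt(tweet: str, labels: list[str]) -> str:
    """Attach descriptions of the predicted misinformation labels so the
    downstream LLM can address each specific concern in its rebuttal."""
    described = "\n".join(
        f"- {lab}: {LABEL_DESCRIPTIONS[lab]}" for lab in labels
    )
    return (
        "The following tweet contains vaccine misinformation of these types:\n"
        f"{described}\n\n"
        f"Tweet: {tweet}\n\n"
        "Write a concise, factual, and respectful counter-argument that "
        "addresses each concern above."
    )

prompt = build_prompt(
    "Vaccines don't even stop infection, so why bother?",
    ["efficacy"],
)
```

The resulting string would then be passed to the LLM (zero-shot or fine-tuned); the classifier's multi-label output decides which descriptions appear, which is what makes the rebuttal context-aware rather than generic.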
Utsav Dhanuka
Indian Institute of Technology Kharagpur, West Bengal 721302, India
Soham Poddar
Senior Research Fellow, Indian Institute of Technology, Kharagpur
natural language processing, green AI, legal AI, computational social science, deep learning
Saptarshi Ghosh
Indian Institute of Technology Kharagpur, West Bengal 721302, India