🤖 AI Summary
To address the limitations of existing RAG models in health misinformation rebuttal—namely, overreliance on a single evidence source and weak controllability in generation—this paper proposes a multi-agent collaborative retrieval-augmented framework. The framework assigns specialized LLM agents to distinct roles: knowledge retrieval, dynamic evidence fusion (integrating static knowledge bases with real-time authoritative sources), and response refinement, enabling fine-grained control over the generation process. Its key contributions are: (i) the first multi-agent RAG architecture specifically designed for health misinformation rebuttal; (ii) a dynamic evidence updating mechanism ensuring timeliness and credibility; and (iii) an end-to-end interpretable generation pipeline. Experiments demonstrate significant improvements over baselines in politeness, relevance, informativeness, and factual accuracy. Ablation studies and human evaluation confirm the necessity of each module and highlight the critical role of response refinement in enhancing alignment with user preferences.
📝 Abstract
Large language models (LLMs) equipped with Retrieval-Augmented Generation (RAG) have demonstrated powerful capabilities in generating counterspeech against misinformation. However, current approaches rely on limited evidence and offer little control over the final output. To address these challenges, we propose a Multi-agent Retrieval-Augmented Framework for generating counterspeech against health misinformation, incorporating multiple LLMs to handle knowledge retrieval, evidence enhancement, and response refinement. Our approach integrates both static and dynamic evidence, ensuring that the generated counterspeech is relevant, well-grounded, and up-to-date. Our method outperforms baseline approaches in politeness, relevance, informativeness, and factual accuracy, demonstrating its effectiveness in generating high-quality counterspeech. To further validate our approach, we conduct ablation studies that verify the necessity of each component in the framework. Furthermore, human evaluations reveal that the refinement stage significantly enhances counterspeech quality and is preferred by human judges.
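The three-stage pipeline described above (retrieval → evidence fusion → refinement) can be sketched as follows. This is a minimal illustrative skeleton, not the authors' actual implementation: the agent class names, the keyword-match retrieval, and the rule-based refinement are all placeholder assumptions standing in for LLM-driven components.

```python
# Illustrative sketch of a multi-agent counterspeech pipeline.
# Each agent is stubbed with simple rule-based logic; in the paper,
# these roles would be played by separate LLM calls. All names and
# heuristics here are hypothetical, for demonstration only.

from dataclasses import dataclass
from typing import List


@dataclass
class Evidence:
    text: str
    source: str  # "static" (knowledge base) or "dynamic" (live authoritative source)


class RetrieverAgent:
    """Stage 1: pull candidate evidence from a static knowledge base."""

    def __init__(self, knowledge_base: List[Evidence]):
        self.kb = knowledge_base

    def retrieve(self, claim: str) -> List[Evidence]:
        # Toy keyword overlap standing in for dense/LLM retrieval.
        words = claim.lower().split()
        return [e for e in self.kb if any(w in e.text.lower() for w in words)]


class EvidenceAgent:
    """Stage 2: fuse static evidence with up-to-date dynamic sources."""

    def enhance(self, static_hits: List[Evidence],
                dynamic_pool: List[Evidence], top_k: int = 3) -> List[Evidence]:
        # Prefer fresh dynamic evidence, then fill with static hits.
        merged = list(dynamic_pool) + [e for e in static_hits if e.source == "static"]
        return merged[:top_k]


class RefinerAgent:
    """Stage 3: rewrite the draft for politeness and tone."""

    def refine(self, draft: str) -> str:
        return "Thanks for sharing your concern. " + draft


def generate_counterspeech(claim: str, retriever: RetrieverAgent,
                           enhancer: EvidenceAgent, refiner: RefinerAgent,
                           dynamic_pool: List[Evidence]) -> str:
    evidence = retriever.retrieve(claim)
    evidence = enhancer.enhance(evidence, dynamic_pool)
    cited = "; ".join(e.text for e in evidence)
    draft = f"The claim '{claim}' is not supported by evidence: {cited}."
    return refiner.refine(draft)
```

The key design point mirrored here is that each stage is an isolated agent with a narrow contract, so any single component (e.g. the refiner) can be ablated or swapped without touching the others — which is what makes the paper's per-module ablation study possible.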