Collaborative Chain-of-Agents for Parametric-Retrieved Knowledge Synergy

πŸ“… 2025-08-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing RAG methods struggle to coordinate large language models' parametric knowledge with external retrieved knowledge, are vulnerable to noisy retrieval results, and lack explicit modeling of the dynamic complementarity between the two knowledge sources. To address this, we propose the Collaborative Chain-of-Agents (CoCoA) framework, which explicitly models this synergy through multi-agent collaboration, conditional knowledge induction, and long-chain reasoning trajectory synthesis. We introduce two variants: CoCoA-zero, a training-free multi-agent RAG framework, and CoCoA, which fine-tunes the LLM on long reasoning trajectories synthesized by CoCoA-zero. Evaluated on open-domain and multi-hop question answering benchmarks, both variants consistently outperform state-of-the-art RAG approaches, with notable gains in generation accuracy and robustness. Ablation studies confirm the efficacy and generalizability of the knowledge coordination mechanism across diverse reasoning scenarios.

πŸ“ Abstract
Retrieval-Augmented Generation (RAG) has emerged as a promising framework for enhancing the capabilities of Large Language Models (LLMs), especially in knowledge-intensive tasks. Despite its advantages, current RAG methods often struggle to *fully exploit knowledge during generation*. In particular, the synergy between the model's internal parametric knowledge and external retrieved knowledge remains limited: retrieved content may sometimes mislead generation, while certain generated content can guide the model toward more accurate outputs. In this work, we propose Collaborative Chain-of-Agents, a framework designed to explicitly enhance the synergy between parametric and retrieved knowledge. Specifically, we introduce CoCoA-zero, a multi-agent RAG framework that first performs conditional knowledge induction and then reasons over the induced knowledge to produce answers. Building on this, we develop CoCoA, a long-chain training strategy that synthesizes extended multi-agent reasoning trajectories from CoCoA-zero to fine-tune the LLM. This strategy enhances the model's capability to explicitly integrate and jointly leverage parametric and retrieved knowledge. Experimental results show that CoCoA-zero and CoCoA achieve superior performance on open-domain and multi-hop QA tasks.
Problem

Research questions and friction points this paper is trying to address.

Enhancing synergy between internal and external knowledge in RAG
Reducing misleading effects of retrieved content on generation
Improving multi-hop QA with integrated knowledge leveraging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent RAG framework for knowledge synergy
Conditional knowledge induction and reasoning
Long-chain training for enhanced integration
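The two-stage pipeline described above (conditional knowledge induction followed by answer reasoning) can be illustrated with a minimal, hypothetical sketch. The `llm` and `retrieve` stubs below are placeholders standing in for a real model and retriever, not the paper's implementation:

```python
# Hypothetical sketch of a CoCoA-zero-style two-stage pipeline:
# an induction agent elicits the model's parametric knowledge, then a
# reasoning agent answers over the merged parametric + retrieved context.

def retrieve(question: str) -> list[str]:
    # Placeholder retriever: a real system would query an external corpus.
    return ["Paris is the capital and largest city of France."]

def llm(prompt: str) -> str:
    # Placeholder LLM: returns canned responses keyed on the prompt stage.
    if "Summarize what you know" in prompt:
        return "France's capital is Paris."  # induced parametric knowledge
    return "Paris"                           # final reasoned answer

def cocoa_zero(question: str) -> str:
    # Stage 1: conditional knowledge induction from both sources.
    parametric = llm(f"Summarize what you know about: {question}")
    retrieved = retrieve(question)
    # Stage 2: reason jointly over the merged knowledge to answer.
    context = parametric + "\n" + "\n".join(retrieved)
    return llm(f"Using only this knowledge:\n{context}\nAnswer: {question}")

print(cocoa_zero("What is the capital of France?"))  # -> Paris
```

Separating induction from reasoning lets the second agent weigh the model's own knowledge against retrieved passages, rather than conditioning generation on retrieval alone.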
πŸ”Ž Similar Papers
No similar papers found.
Yi Jiang
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology
Sendong Zhao
Harbin Institute of Technology
BioNLP · Large Language Model
Jianbo Li
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology
Haochun Wang
PhD, Harbin Institute of Technology
NLP · Large Language Model · AI4Science
Lizhe Zhang
Student at Georgia Institute of Technology
Yan Liu
China Mobile Group Heilongjiang Co., Ltd
Bin Qin
Institute of Software, Chinese Academy of Sciences
Machine Learning · Causal Inference