CortexDebate: Debating Sparsely and Equally for Multi-Agent Debate

📅 2025-07-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from severe hallucination, limited reasoning capability, and rigid single-path inference. To address these issues, this paper proposes CortexDebate, a multi-agent debate method that, inspired by the brain's white-matter-governed cortical network, constructs a sparse, dynamically optimized debate graph among LLM agents, thereby mitigating long-context interference and overconfidence-induced debate imbalance. A McKinsey-based Debate Matter (MDM) module, an artificial analog of white matter grounded in the McKinsey Trust Equation from sociology, provides credible trustworthiness evaluations that guide graph optimization. Evaluated across eight benchmark datasets spanning four task categories, including mathematical reasoning and commonsense question answering, CortexDebate reports significant improvements: an average +4.2% gain in reasoning accuracy and a 37% reduction in inference steps. The results demonstrate its effectiveness, robustness, and strong cross-task generalization.

📝 Abstract
Nowadays, a single Large Language Model (LLM) struggles with critical issues such as hallucination and inadequate reasoning ability. To mitigate these issues, Multi-Agent Debate (MAD) has emerged as an effective strategy, where LLM agents engage in in-depth debates with others on tasks. However, existing MAD methods face two major issues: (a) overly lengthy input contexts, which cause LLM agents to get lost in the abundance of input information and suffer performance drops; and (b) the overconfidence dilemma, where self-assured LLM agents dominate the debate, leading to low debating effectiveness. To address these limitations, we propose a novel MAD method called "CortexDebate". Inspired by the human brain's tendency to establish a sparse and dynamically optimized network among cortical areas governed by white matter, CortexDebate constructs a sparse debating graph among LLM agents, where each LLM agent only debates with the ones that are helpful to it. To optimize the graph, we propose a module named McKinsey-based Debate Matter (MDM), which acts as an artificial analog to white matter. By integrating the McKinsey Trust Formula, a well-established measure of trustworthiness from sociology, MDM enables credible evaluations that guide graph optimization. The effectiveness of our CortexDebate has been well demonstrated by extensive experimental results across eight datasets from four task types.
Problem

Research questions and friction points this paper is trying to address.

Addresses lengthy input contexts in multi-agent debates
Mitigates overconfidence dilemma among self-assured LLM agents
Optimizes sparse debating graph for effective agent interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse debating graph among LLM agents
McKinsey-based Debate Matter for optimization
Dynamic trust evaluation for debating effectiveness
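The core idea above can be sketched in a few lines: score each agent with the McKinsey Trust Equation (credibility + reliability + intimacy, divided by self-orientation) and keep only edges to peers whose trust score clears a threshold. This is a minimal illustration, not the paper's implementation; the `AgentProfile` fields, the threshold rule, and all names are assumptions for exposition.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Illustrative per-agent signals an MDM-like module might track."""
    credibility: float       # quality of past arguments
    reliability: float       # consistency of answers across rounds
    intimacy: float          # topical relevance to the current task
    self_orientation: float  # overconfidence / self-serving tendency

def mckinsey_trust(p: AgentProfile) -> float:
    """McKinsey Trust Equation: (credibility + reliability + intimacy) / self-orientation."""
    return (p.credibility + p.reliability + p.intimacy) / max(p.self_orientation, 1e-6)

def sparse_debate_graph(profiles: dict[str, AgentProfile],
                        threshold: float) -> dict[str, list[str]]:
    """Each agent keeps edges only to peers whose trust score meets the threshold,
    so low-trust (e.g. overconfident) agents stop dominating the debate."""
    scores = {name: mckinsey_trust(p) for name, p in profiles.items()}
    return {
        name: [peer for peer in scores if peer != name and scores[peer] >= threshold]
        for name in profiles
    }
```

Recomputing the scores after each debate round would give the dynamic topology optimization the paper describes: edges to agents whose trustworthiness drops are pruned, keeping the graph sparse.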
Yiliu Sun
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
Zicheng Zhao
Nanjing University of Science and Technology (Knowledge Graph, Large Language Model, Few-shot Learning, Semi-Supervised Learning)
Sheng Wan
Nanjing University of Science and Technology (machine learning, hyperspectral image classification)
Chen Gong
School of Automation and Intelligent Sensing, Shanghai Jiao Tong University, Shanghai, China