If Multi-Agent Debate is the Answer, What is the Question?

📅 2025-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multi-agent debate (MAD) research suffers from inadequate evaluation practices, including inconsistent baselines and limited dataset overlap, which undermine claims of generalizability. This work systematically evaluates five representative MAD methods across nine diverse benchmarks and four foundation models, and finds that they fail to consistently outperform single-agent baselines such as Chain-of-Thought and Self-Consistency, even while consuming more inference-time compute. Crucially, the analysis identifies model heterogeneity as a previously overlooked factor in MAD performance. Building on this insight, the authors propose Heter-MAD, a lightweight extension in which a single LLM agent also accesses the outputs of heterogeneous foundation models, boosting existing MAD frameworks; it is reported to yield an average accuracy gain of 4.2% across the nine benchmarks. The paper closes by outlining directions for advancing MAD research.
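The debate protocol evaluated above can be sketched generically: each agent answers, then revises after seeing its peers' answers, and a majority vote decides. A minimal offline sketch follows, with `ask` as a deterministic stub for an LLM call; the paper's actual prompts, agent counts, and aggregation rules differ, so this is illustrative only:

```python
# Generic multi-agent debate (MAD) loop: agents answer, see peers'
# answers, revise, and a majority vote picks the final answer.
from collections import Counter

def ask(agent_id: int, question: str, peer_answers: list[str]) -> str:
    """Stub LLM call. Real use would prompt a model with the question
    plus the other agents' previous answers."""
    # Toy behavior: agents start with differing answers, then adopt
    # the majority of what they saw in the previous round.
    if not peer_answers:
        return ["4", "4", "5"][agent_id]  # initial disagreement
    return Counter(peer_answers).most_common(1)[0][0]

def debate(question: str, n_agents: int = 3, rounds: int = 2) -> str:
    answers = [ask(i, question, []) for i in range(n_agents)]
    for _ in range(rounds):
        answers = [
            ask(i, question, [a for j, a in enumerate(answers) if j != i])
            for i in range(n_agents)
        ]
    return Counter(answers).most_common(1)[0][0]  # majority vote

print(debate("What is 2 + 2?"))  # stub converges to "4"
```

With real model calls, each debate round multiplies inference cost by the number of agents, which is why the paper compares MAD against compute-matched single-agent baselines.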


📝 Abstract
Multi-agent debate (MAD) has emerged as a promising approach to enhance the factual accuracy and reasoning quality of large language models (LLMs) by engaging multiple agents in iterative discussions during inference. Despite its potential, we argue that current MAD research suffers from critical shortcomings in evaluation practices, including limited dataset overlap and inconsistent baselines, raising significant concerns about generalizability. Correspondingly, this paper presents a systematic evaluation of five representative MAD methods across nine benchmarks using four foundation models. Surprisingly, our findings reveal that MAD methods fail to reliably outperform simple single-agent baselines such as Chain-of-Thought and Self-Consistency, even when consuming additional inference-time computation. Our analysis further shows that model heterogeneity can significantly improve MAD frameworks. We propose Heter-MAD, which enables a single LLM agent to access the output of heterogeneous foundation models and boosts the performance of current MAD frameworks. Finally, we outline potential directions for advancing MAD, aiming to spark a broader conversation and inspire future work in this area.
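The Heter-MAD idea from the abstract, one agent consulting the outputs of heterogeneous foundation models, can be illustrated with stubs. The names `draft` and `aggregate` and the model names are hypothetical, not the paper's API; real use would replace the stubs with calls to different backbone models and an LLM aggregation prompt:

```python
# Illustrative sketch of the Heter-MAD idea: gather draft answers from
# heterogeneous foundation models, then let a single agent aggregate.
from collections import Counter

def draft(model_name: str, question: str) -> str:
    """Stub for querying one foundation model (canned answers here)."""
    canned = {"model-a": "Paris", "model-b": "Paris", "model-c": "Lyon"}
    return canned[model_name]

def aggregate(question: str, drafts: dict[str, str]) -> str:
    """Stub for the single aggregating agent. Real use would prompt one
    LLM with the question and the heterogeneous drafts; here we simply
    take the majority answer."""
    return Counter(drafts.values()).most_common(1)[0][0]

def heter_mad(question: str, models: list[str]) -> str:
    drafts = {m: draft(m, question) for m in models}
    return aggregate(question, drafts)

print(heter_mad("Capital of France?", ["model-a", "model-b", "model-c"]))
# stub prints "Paris"
```

The design point this sketch captures is that diversity comes from heterogeneous backbones rather than from repeated debate rounds of a single model, which is where the paper locates the gains.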
Problem

Research questions and friction points this paper is trying to address.

Determining whether multi-agent debate reliably outperforms single-agent baselines
Addressing evaluation shortcomings in current MAD research, such as limited dataset overlap and inconsistent baselines
Improving MAD frameworks by exploiting model heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic evaluation of five representative MAD methods across nine benchmarks and four foundation models
Identification of model heterogeneity as a key factor in MAD performance
Heter-MAD, which boosts existing MAD frameworks by giving a single agent access to heterogeneous model outputs