Single-Agent LLMs Outperform Multi-Agent Systems on Multi-Hop Reasoning Under Equal Thinking Token Budgets

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study asks whether multi-agent systems genuinely outperform single-agent systems when computational resources are constrained, specifically under a fixed inference token budget and matched evaluation conditions. The authors propose an information-theoretic analytical framework, centered on the data processing inequality, to rigorously evaluate information efficiency. They systematically compare single- and multi-agent architectures across multiple state-of-the-art model families (Qwen3, DeepSeek-R1-Distill-Llama, and Gemini 2.5) on multi-hop reasoning tasks while strictly controlling token usage. The findings reveal that the apparent advantages of multi-agent systems often stem from uncontrolled computational and contextual effects rather than inherent architectural superiority. Under fixed token budgets, single-agent systems consistently match or exceed multi-agent performance, challenging prevailing assumptions about multi-agent efficacy and exposing evaluation artifacts in current benchmarks.
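The information-theoretic core of the argument is the standard Data Processing Inequality; how it maps onto agent pipelines is the paper's contribution, but the inequality itself can be stated compactly. If the task input $X$ is processed into an intermediate representation $Y$ (e.g., one agent's summary), which is then processed into a downstream representation $Z$ (the next agent's input), the variables form a Markov chain and no processing step can create information about $X$:

$$X \to Y \to Z \quad\Longrightarrow\quad I(X;Z) \,\le\, I(X;Y)$$

Intuitively, each hand-off between agents is an additional processing step that can only preserve or lose information about the original input, which is why, under a fixed token budget, a single agent with full context utilization is at least as information-efficient.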
📝 Abstract
Recent work reports strong performance from multi-agent LLM systems (MAS), but these gains are often confounded by increased test-time computation. When computation is normalized, single-agent systems (SAS) can match or outperform MAS, yet the theoretical basis and evaluation methodology behind this comparison remain unclear. We present an information-theoretic argument, grounded in the Data Processing Inequality, suggesting that under a fixed reasoning-token budget and with perfect context utilization, single-agent systems are more information-efficient. This perspective further predicts that multi-agent systems become competitive when a single agent's effective context utilization is degraded, or when more compute is expended. We test these predictions in a controlled empirical study across three model families (Qwen3, DeepSeek-R1-Distill-Llama, and Gemini 2.5), comparing SAS with multiple MAS architectures under matched budgets. We find that SAS consistently match or outperform MAS on multi-hop reasoning tasks when reasoning tokens are held constant. Beyond aggregate performance, we conduct a detailed diagnostic analysis of system behavior and evaluation methodology. We identify significant artifacts in API-based budget control (particularly in Gemini 2.5) and in standard benchmarks, both of which can inflate apparent gains from MAS. Overall, our results suggest that, for multi-hop reasoning tasks, many reported advantages of multi-agent systems are better explained by unaccounted computation and context effects rather than inherent architectural benefits, and highlight the importance of understanding and explicitly controlling the trade-offs between compute, context, and coordination in agentic systems.
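The budget-matching protocol described in the abstract can be sketched as a token-accounting harness. Everything below is illustrative: `generate` is a stub standing in for a real LLM call (the paper uses Qwen3, DeepSeek-R1-Distill-Llama, and Gemini 2.5), and the even budget split across agents is one plausible normalization, not necessarily the paper's exact scheme.

```python
# Hypothetical sketch of a budget-matched SAS-vs-MAS comparison.
# `generate` is a stub with a toy cost model so the token accounting
# can be exercised without calling a real model API.

def generate(prompt: str, max_tokens: int) -> tuple[str, int]:
    """Stub LLM call: returns an answer string and the reasoning tokens used."""
    used = min(max_tokens, len(prompt.split()) * 4)  # pretend cost model
    return f"answer[{prompt[:24]}]", used

def run_sas(question: str, budget: int) -> tuple[str, int]:
    """Single-agent system: the entire budget funds one reasoning pass."""
    return generate(question, max_tokens=budget)

def run_mas(question: str, budget: int, n_agents: int = 3) -> tuple[str, int]:
    """Multi-agent system: the SAME total budget is split across n agents
    plus one aggregation step, so each pass gets budget // (n_agents + 1)."""
    share = budget // (n_agents + 1)
    drafts, spent = [], 0
    for i in range(n_agents):
        ans, used = generate(f"[agent {i}] {question}", max_tokens=share)
        drafts.append(ans)
        spent += used
    final, used = generate(" | ".join(drafts), max_tokens=share)  # aggregator
    return final, spent + used

budget = 2048
question = "Who directed the film whose lead actor won the award in 2003?"
_, sas_tokens = run_sas(question, budget)
_, mas_tokens = run_mas(question, budget)
assert sas_tokens <= budget and mas_tokens <= budget  # budgets are matched
```

The key point the harness enforces is that both architectures draw from the same total reasoning-token pool; without that normalization, MAS gains are confounded with extra test-time compute.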
Problem

Research questions and friction points this paper is trying to address.

multi-agent systems
single-agent systems
multi-hop reasoning
reasoning token budget
compute normalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

information efficiency
reasoning token budget
multi-agent LLM systems
Data Processing Inequality
multi-hop reasoning