A Comprehensive Evaluation of LLM Reasoning: From Single-Model to Multi-Agent Paradigms

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic evaluation of the effectiveness and cost-accuracy trade-offs of prevailing large language model reasoning paradigms, such as chain-of-thought and multi-agent systems. The authors propose a unified evaluation framework that encompasses direct generation, chain-of-thought, and multi-agent workflows, conducting comparative experiments across multiple closed-form benchmarks. They introduce role-isolation analysis and cost-accuracy modeling to dissect performance drivers. Additionally, they present MIMeBench, a novel open-ended benchmark designed to assess semantic abstraction and contrastive discrimination through fine-grained semantic dimensions. Their findings reveal that increased reasoning complexity does not necessarily yield performance gains; multi-agent systems exhibit strong task dependence, with certain configurations incurring substantial costs for marginal benefits. MIMeBench uncovers nuanced semantic reasoning differences that conventional metrics often overlook.
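
To make the comparison concrete, below is a minimal sketch of the kind of evaluation loop the summary describes: the same closed-form question is answered by a direct prompt, a CoT prompt, and a two-role solver/critic workflow, and accuracy plus token cost are tracked per paradigm. This is not the authors' released code (that is linked in the abstract below); `call_llm`, the prompts, and the substring-match scoring are illustrative assumptions.

```python
# A minimal sketch (not the authors' released code) of comparing three
# reasoning paradigms on a closed-form benchmark while tracking token cost.
# `call_llm` is a hypothetical stand-in for any chat-completion API and is
# assumed to return (answer_text, tokens_used).

def call_llm(prompt: str) -> tuple[str, int]:
    raise NotImplementedError("plug in a model API that returns (text, token_count)")

def direct(question: str) -> tuple[str, int]:
    # Direct single-model generation: ask for the final answer only.
    return call_llm(f"Question: {question}\nAnswer with the final result only.")

def chain_of_thought(question: str) -> tuple[str, int]:
    # CoT-augmented single-model reasoning: elicit intermediate steps first.
    return call_llm(f"Question: {question}\nThink step by step, then state the final answer.")

def solver_critic(question: str) -> tuple[str, int]:
    # A simple two-role MAS workflow: a solver drafts, a critic reviews and revises.
    draft, t1 = call_llm(f"Solve: {question}")
    final, t2 = call_llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        "Review the draft and give a corrected final answer."
    )
    return final, t1 + t2

def evaluate(paradigm, dataset: list[tuple[str, str]]) -> tuple[float, float]:
    """dataset is a list of (question, gold_answer) pairs; returns (accuracy, avg_tokens)."""
    correct, tokens = 0, 0
    for question, gold in dataset:
        answer, used = paradigm(question)
        correct += int(gold.strip().lower() in answer.strip().lower())
        tokens += used
    n = max(len(dataset), 1)
    return correct / n, tokens / n
```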

📝 Abstract
Large Language Models (LLMs) are increasingly deployed as reasoning systems, where reasoning paradigms - such as Chain-of-Thought (CoT) and multi-agent systems (MAS) - play a critical role, yet their relative effectiveness and cost-accuracy trade-offs remain poorly understood. In this work, we conduct a comprehensive and unified evaluation of reasoning paradigms, spanning direct single-model generation, CoT-augmented single-model reasoning, and representative MAS workflows, characterizing their reasoning performance across a diverse suite of closed-form benchmarks. Beyond overall performance, we probe role-specific capability demands in MAS using targeted role isolation analyses, and analyze cost-accuracy trade-offs to identify which MAS workflows offer a favorable balance between cost and accuracy, and which incur prohibitive overhead for marginal gains. We further introduce MIMeBench, a new open-ended benchmark that targets two foundational yet underexplored semantic capabilities - semantic abstraction and contrastive discrimination - thereby providing an alternative evaluation axis beyond closed-form accuracy and enabling fine-grained assessment of semantic competence that is difficult to capture with existing benchmarks. Our results show that increased structural complexity does not consistently lead to improved reasoning performance, with its benefits being highly dependent on the properties and suitability of the reasoning paradigm itself. The code is released at https://gitcode.com/HIT1920/OpenLLMBench.
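
The cost-accuracy analysis in the abstract can be read as asking which workflows are Pareto-efficient: no other workflow is simultaneously cheaper and more accurate. A minimal sketch of that selection follows, with purely illustrative numbers rather than results from the paper.

```python
# A minimal sketch of identifying cost-accuracy Pareto-efficient workflows.
# The workflow names and numbers below are illustrative placeholders,
# not measurements reported in the paper.

def pareto_frontier(results: dict[str, tuple[float, float]]) -> list[str]:
    """results maps workflow name -> (accuracy, avg_cost); returns non-dominated names."""
    frontier = []
    for name, (acc, cost) in results.items():
        dominated = any(
            o_acc >= acc and o_cost <= cost and (o_acc > acc or o_cost < cost)
            for other, (o_acc, o_cost) in results.items()
            if other != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

if __name__ == "__main__":
    measured = {
        "direct":            (0.61, 1.0),   # (accuracy, relative cost)
        "cot":               (0.68, 2.3),
        "mas_debate":        (0.69, 9.8),
        "mas_solver_critic": (0.71, 4.5),
    }
    print(pareto_frontier(measured))  # mas_debate is dominated: costlier and less accurate
```

Under these made-up numbers, mas_debate drops off the frontier because the solver/critic workflow is both cheaper and more accurate, mirroring the paper's point that some MAS configurations pay a substantial cost for marginal gains.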
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
reasoning paradigms
Chain-of-Thought
multi-agent systems
cost-accuracy trade-offs
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning paradigms
multi-agent systems
cost-accuracy trade-off
semantic abstraction
MIMeBench
Yapeng Li
Harbin Institute of Technology
Jiakuo Yu
Harbin Institute of Technology
Zhixin Liu
Harbin Institute of Technology
Xinnan Liu
Harbin Institute of Technology
Jing Yu
Northwestern University
Sustainability · Life Cycle Analysis · Transportation Management · Operations Research
Songze Li
Harbin Institute of Technology
Tonghua Su
Professor, Harbin Institute of Technology
pattern recognition · character recognition · machine learning · software engineering