🤖 AI Summary
This study addresses the lack of systematic evaluation of the effectiveness and cost-accuracy trade-offs of prevailing large language model reasoning paradigms, such as chain-of-thought and multi-agent systems. The authors propose a unified evaluation framework that encompasses direct generation, chain-of-thought, and multi-agent workflows, conducting comparative experiments across multiple closed-form benchmarks. They introduce role-isolation analysis and cost-accuracy modeling to dissect performance drivers. Additionally, they present MIMeBench, a novel benchmark designed to assess semantic abstraction and contrastive discrimination capabilities along fine-grained semantic dimensions. Their findings reveal that increased reasoning complexity does not necessarily yield performance gains; multi-agent systems exhibit high task dependency, with certain configurations incurring substantial costs for marginal benefits. MIMeBench effectively uncovers nuanced semantic reasoning differences that conventional metrics often overlook.
📝 Abstract
Large Language Models (LLMs) are increasingly deployed as reasoning systems, where reasoning paradigms - such as Chain-of-Thought (CoT) and multi-agent systems (MAS) - play a critical role, yet their relative effectiveness and cost-accuracy trade-offs remain poorly understood. In this work, we conduct a comprehensive and unified evaluation of reasoning paradigms, spanning direct single-model generation, CoT-augmented single-model reasoning, and representative MAS workflows, characterizing their reasoning performance across a diverse suite of closed-form benchmarks. Beyond overall performance, we probe role-specific capability demands in MAS through targeted role-isolation analyses, and we analyze cost-accuracy trade-offs to identify which MAS workflows offer a favorable balance between cost and accuracy, and which incur prohibitive overhead for marginal gains. We further introduce MIMeBench, a new open-ended benchmark that targets two foundational yet underexplored semantic capabilities - semantic abstraction and contrastive discrimination - thereby providing an alternative evaluation axis beyond closed-form accuracy and enabling fine-grained assessment of semantic competence that is difficult to capture with existing benchmarks. Our results show that increased structural complexity does not consistently improve reasoning performance; its benefits depend heavily on the properties and task suitability of the reasoning paradigm itself. The code is released at https://gitcode.com/HIT1920/OpenLLMBench.
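The cost-accuracy trade-off analysis described in the abstract can be illustrated with a minimal sketch. All paradigm names, accuracy values, and token costs below are hypothetical placeholders, not figures from the paper; the point is only the shape of the comparison (marginal cost paid per unit of accuracy gained over a direct-generation baseline):

```python
# Illustrative cost-accuracy comparison across reasoning paradigms.
# All numbers are made up for demonstration; they are NOT results from the paper.
paradigms = {
    # name: (accuracy on some closed-form benchmark, total tokens per query)
    "direct": (0.62, 800),
    "cot": (0.68, 2400),
    "mas_debate": (0.69, 14000),
}

def marginal_cost_per_gain(base: str, other: str) -> float:
    """Extra tokens spent per point of accuracy gained over the baseline.

    Returns infinity when the paradigm does not improve on the baseline,
    mirroring the 'prohibitive overhead for marginal gains' case.
    """
    acc_b, cost_b = paradigms[base]
    acc_o, cost_o = paradigms[other]
    gain = acc_o - acc_b
    if gain <= 0:
        return float("inf")
    return (cost_o - cost_b) / gain

for name in ("cot", "mas_debate"):
    print(f"{name}: {marginal_cost_per_gain('direct', name):.0f} tokens per accuracy point")
```

Under these toy numbers, the multi-agent workflow pays far more tokens per accuracy point than CoT, which is the kind of unfavorable trade-off the evaluation framework is designed to surface.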