🤖 AI Summary
Existing benchmarks inadequately evaluate large language models’ (LLMs) capabilities in collaborative, multi-stage reasoning and autonomous optimization under complex real-world scenarios.
Method: We introduce MSCoRe—the first comprehensive benchmark for multi-stage collaborative reasoning—spanning automotive, pharmaceutical, electronics, and energy domains, comprising 126,696 high-quality domain-specific question-answer pairs. MSCoRe employs a three-stage progressive data construction pipeline: dynamic sampling, iterative question-answering generation, and multi-level quality assessment, with tasks systematically stratified by difficulty.
Contribution/Results: Evaluating leading LLM-based agent systems using ROUGE and other metrics reveals that while commercial models outperform open-source counterparts overall, their performance degrades significantly on higher-order collaborative tasks and exhibits heightened sensitivity to input noise. MSCoRe fills a critical gap in multi-stage reasoning evaluation, establishing a rigorous, domain-diverse benchmark to advance LLM agents' capabilities in realistic, complex environments.
📝 Abstract
Large Language Models (LLMs) have excelled in question-answering (QA) tasks within single domains. However, their reasoning and coordination capabilities in complex, multi-stage scenarios remain underexplored. Existing benchmarks typically focus on isolated tasks or narrow domains, overlooking models' ability to collaborate and optimize across multiple stages without explicit external guidance. To bridge this gap, we propose **MSCoRe**, a novel benchmark comprising 126,696 domain-specific QA instances spanning scenarios in the automotive, pharmaceutical, electronics, and energy sectors. The dataset is created using a structured three-phase pipeline: dynamic sampling, iterative question-answer generation, and multi-level quality assessment to ensure data quality. Tasks are further categorized into three difficulty levels according to stage coverage and complexity. With MSCoRe, we conducted a comprehensive evaluation of various state-of-the-art LLM agents. Commercial models performed best across all tasks and scenarios, but a notable gap in ROUGE scores remains between simple and complex tasks. We also tested the models' robustness and found that their performance is negatively affected by noisy data. MSCoRe provides a valuable new resource for the community to evaluate and improve multi-stage reasoning in LLM agents. The code and data are available at https://github.com/D3E0-source/MSCoRE.
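The paper reports results in ROUGE scores but does not reproduce the evaluation code here. As a rough, self-contained illustration of the metric family involved (real evaluations typically use a package such as `rouge-score`), ROUGE-1 F1 can be sketched as clipped unigram overlap between a reference answer and a model prediction:

```python
from collections import Counter

def rouge1_f1(reference: str, prediction: str) -> float:
    """Minimal ROUGE-1 F1: clipped unigram overlap between two strings."""
    ref_counts = Counter(reference.lower().split())
    pred_counts = Counter(prediction.lower().split())
    if not ref_counts or not pred_counts:
        return 0.0
    # Counter intersection clips each unigram to its minimum count on either side.
    overlap = sum((ref_counts & pred_counts).values())
    recall = overlap / sum(ref_counts.values())
    precision = overlap / sum(pred_counts.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Five of six unigrams match on both sides, so precision = recall = F1 = 5/6.
print(round(rouge1_f1("the cat sat on the mat", "the cat is on the mat"), 4))  # 0.8333
```

This omits stemming and the ROUGE-2/ROUGE-L variants usually reported alongside ROUGE-1, but shows why longer, multi-stage answers with many required terms score lower when any stage is missed.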