MorphoBench: A Benchmark with Difficulty Adaptive to Model Reasoning

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks for evaluating large language model (LLM) reasoning suffer from narrow disciplinary coverage and static, fixed difficulty levels. To address these limitations, this paper introduces a multi-disciplinary reasoning assessment platform whose difficulty evolves dynamically: it automatically adjusts question difficulty based on key reasoning statements generated by models, the first implementation of adaptive, self-evolving evaluation difficulty. The platform combines curation of Olympiad-level real problems with simulation-software-based synthetic question generation, ensuring high question quality while substantially reducing construction cost. The authors build and iteratively refine a benchmark of over 1,300 high-quality test items spanning diverse disciplines. Extensive experiments show that the platform supports fine-grained assessment of reasoning capability across state-of-the-art models, including o3 and GPT-5. The code is publicly released.

📝 Abstract
With the advancement of powerful large-scale reasoning models, effectively evaluating their reasoning capabilities has become increasingly important. However, existing benchmarks designed to assess the reasoning abilities of large models tend to be limited in scope and lack the flexibility to adapt their difficulty to the evolving reasoning capacities of the models. To address this, we propose MorphoBench, a benchmark that incorporates multidisciplinary questions to evaluate the reasoning capabilities of large models and can adjust and update question difficulty based on the reasoning abilities of advanced models. Specifically, we curate the benchmark by selecting and collecting complex reasoning questions from existing benchmarks and from sources such as Olympiad-level competitions. Additionally, MorphoBench adaptively modifies the analytical challenge of questions by leveraging key statements generated during the model's reasoning process. Furthermore, it includes questions generated with simulation software, enabling dynamic adjustment of benchmark difficulty with minimal resource consumption. We have gathered over 1,300 test questions and iteratively adjusted the difficulty of MorphoBench based on the reasoning capabilities of models such as o3 and GPT-5. MorphoBench enhances the comprehensiveness and validity of model reasoning evaluation, providing reliable guidance for improving both the reasoning abilities and scientific robustness of large models. The code has been released at https://github.com/OpenDCAI/MorphoBench.
Problem

Research questions and friction points this paper is trying to address.

Evaluating reasoning capabilities of large-scale models with adaptive difficulty
Addressing limited scope and inflexible difficulty in existing benchmarks
Providing multidisciplinary questions that dynamically adjust analytical challenge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts question difficulty using model reasoning statements
Incorporates simulation software for dynamic benchmark adjustment
Leverages multidisciplinary Olympiad-level questions for evaluation
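The adaptive-difficulty idea above can be sketched as a simple loop: solve a question, extract the key statements from the model's reasoning trace, and fold one of them back into the prompt as an extra constraint until the model fails. This is an illustrative sketch only; `extract_key_statements`, `harden_question`, and `adapt_difficulty` are hypothetical placeholders, not the paper's actual implementation.

```python
def extract_key_statements(reasoning_trace: str) -> list[str]:
    # Placeholder: MorphoBench derives key statements from the model's
    # reasoning process; this trivial stand-in keeps sentences that
    # carry an explicit conclusion marker.
    return [s.strip() for s in reasoning_trace.split(".")
            if any(k in s.lower() for k in ("therefore", "thus", "hence"))]

def harden_question(question: str, key_statements: list[str]) -> str:
    # Placeholder transformation: fold a key intermediate conclusion
    # back into the prompt as an extra claim the model must justify.
    if not key_statements:
        return question
    return (question + " Additionally, justify why the following holds: "
            + key_statements[0] + ".")

def adapt_difficulty(question, solve, is_correct, max_rounds=3):
    """Re-harden a question until the model fails or rounds run out."""
    for _ in range(max_rounds):
        answer, trace = solve(question)   # model returns answer + trace
        if not is_correct(answer):
            break                         # difficulty now exceeds the model
        question = harden_question(question, extract_key_statements(trace))
    return question
```

In this sketch a correct answer triggers another hardening round, so the benchmark's difficulty tracks the model's current capability rather than staying fixed.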