R-Bench: Graduate-level Multi-disciplinary Benchmarks for LLM&MLLM Complex Reasoning Evaluation

📅 2025-05-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing reasoning benchmarks inadequately assess the complex, interdisciplinary, and multimodal reasoning capabilities required in real-world scenarios. Method: We introduce R-Bench, the first graduate-level, bilingual (English-Chinese), interdisciplinary, and multimodal reasoning benchmark, covering 108 subjects for language-model evaluation and 83 subjects for multimodal evaluation, with 1,094 language-based and 665 multimodal questions. R-Bench is built around three design principles: cross-lingual alignment, disciplinary balance, and Olympiad-level difficulty control. Question generation and validation follow educational and cognitive-science principles, incorporating expert annotation, bilingual alignment, calibrated difficulty scoring, and inter-annotator consistency verification. Contribution/Results: Comprehensive evaluation reveals substantial limitations of large language models in complex multimodal reasoning: even OpenAI o1 achieves only 53.2% accuracy on the multimodal evaluation. The benchmark dataset and code are publicly released, establishing a standardized, rigorous evaluation paradigm for advanced reasoning capabilities.

📝 Abstract
Reasoning stands as a cornerstone of intelligence, enabling the synthesis of existing knowledge to solve complex problems. Despite remarkable progress, existing reasoning benchmarks often fail to rigorously evaluate the nuanced reasoning capabilities required for complex, real-world problem-solving, particularly in multi-disciplinary and multimodal contexts. In this paper, we introduce a graduate-level, multi-disciplinary, English-Chinese benchmark, dubbed Reasoning Bench (R-Bench), for assessing the reasoning capability of both language and multimodal models. R-Bench spans 1,094 questions across 108 subjects for language model evaluation and 665 questions across 83 subjects for multimodal model testing, in both English and Chinese. These questions are meticulously curated to ensure rigorous difficulty calibration, subject balance, and cross-linguistic alignment, establishing the assessment as an Olympiad-level multi-disciplinary benchmark. We evaluate widely used models, including OpenAI o1, GPT-4o, and DeepSeek-R1. Experimental results indicate that advanced models perform poorly on complex reasoning, especially multimodal reasoning. Even the top-performing model, OpenAI o1, achieves only 53.2% accuracy on our multimodal evaluation. Data and code are made publicly available here.
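
The abstract reports accuracy aggregated over many subjects (e.g., OpenAI o1 at 53.2% on the multimodal split). Below is a minimal Python sketch of how overall and per-subject accuracy could be tallied for a benchmark of this kind; the record fields (subject, answer, prediction) are illustrative assumptions, not the released R-Bench schema.

from collections import defaultdict

def accuracy_by_subject(records):
    """Tally overall and per-subject accuracy from answered records."""
    correct, total = 0, 0
    per_subject = defaultdict(lambda: [0, 0])  # subject -> [correct, total]
    for r in records:
        hit = r["prediction"].strip().lower() == r["answer"].strip().lower()
        correct += hit
        total += 1
        per_subject[r["subject"]][0] += hit
        per_subject[r["subject"]][1] += 1
    overall = correct / total if total else 0.0
    breakdown = {s: c / t for s, (c, t) in per_subject.items()}
    return overall, breakdown

# Toy usage with three hypothetical multiple-choice records.
records = [
    {"subject": "Physics", "answer": "B", "prediction": "B"},
    {"subject": "Physics", "answer": "C", "prediction": "A"},
    {"subject": "Linguistics", "answer": "D", "prediction": "D"},
]
overall, by_subject = accuracy_by_subject(records)
print(f"overall accuracy: {overall:.1%}")  # 66.7%
print(by_subject)                          # {'Physics': 0.5, 'Linguistics': 1.0}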
Problem

Research questions and friction points this paper is trying to address.

Evaluating nuanced reasoning in multi-disciplinary contexts
Assessing complex reasoning in multimodal and language models
Addressing gaps in current reasoning benchmarks' difficulty and scope
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graduate-level multi-disciplinary benchmark for reasoning evaluation
English-Chinese bilingual assessment for language and multimodal models
Olympiad-level difficulty calibration and cross-linguistic alignment