Are Your LLMs Capable of Stable Reasoning?

📅 2024-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Current large language models (LLMs) show insufficient accuracy and stability in complex mathematical reasoning, while mainstream evaluation methods, which rely on static benchmarks prone to data leakage and on single-shot sampling, fail to assess true reasoning robustness. Method: The paper proposes G-Pass@k, a continuous-sampling evaluation metric that jointly quantifies peak performance and output consistency; introduces LiveMathBench, a periodically updated mathematical benchmark designed to resist data leakage; and releases an open-source, reproducible multi-round statistical evaluation framework. Results: Experiments across leading LLMs show that high Pass@1 accuracy frequently coexists with low G-Pass@k stability, demonstrating that “single-shot optimality” does not imply “robust reliability.” This work shifts the LLM evaluation paradigm from maximizing point-estimate scores toward prioritizing reliability, consistency, and reproducibility in mathematical reasoning assessment.

📝 Abstract
The rapid advancement of Large Language Models (LLMs) has demonstrated remarkable progress in complex reasoning tasks. However, a significant discrepancy persists between benchmark performances and real-world applications. We identify this gap as primarily stemming from current evaluation protocols and metrics, which inadequately capture the full spectrum of LLM capabilities, particularly in complex reasoning tasks where both accuracy and consistency are crucial. This work makes two key contributions. First, we introduce G-Pass@k, a novel evaluation metric that provides a continuous assessment of model performance across multiple sampling attempts, quantifying both the model's peak performance potential and its stability. Second, we present LiveMathBench, a dynamic benchmark comprising challenging, contemporary mathematical problems designed to minimize data leakage risks during evaluation. Through extensive experiments using G-Pass@k on state-of-the-art LLMs with LiveMathBench, we provide comprehensive insights into both their maximum capabilities and operational consistency. Our findings reveal substantial room for improvement in LLMs' "realistic" reasoning capabilities, highlighting the need for more robust evaluation methods. The benchmark and detailed results are available at: https://github.com/open-compass/GPassK.
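The abstract describes G-Pass@k as a continuous assessment over multiple sampling attempts that captures both peak performance and stability, but the page does not give the metric's exact formula. The sketch below is therefore illustrative only: it assumes a thresholded generalization of the standard unbiased pass@k estimator in which a problem counts as solved at tolerance τ when at least ⌈τ·k⌉ of k samples drawn without replacement are correct; the function names `pass_at_k` and `g_pass_at_k` are my own, and the paper's actual definition may differ.

```python
from math import ceil, comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations (c of them
    correct) is correct."""
    if n - c < k:
        return 1.0  # too few incorrect samples to fill all k slots
    return 1.0 - comb(n - c, k) / comb(n, k)

def g_pass_at_k(n: int, c: int, k: int, tau: float) -> float:
    """Thresholded sketch of the G-Pass@k idea: probability that at
    least ceil(tau * k) of k samples drawn without replacement are
    correct (a hypergeometric tail). tau = 1/k recovers the standard
    pass@k; tau = 1.0 demands that all k samples be correct."""
    m = ceil(tau * k)
    return sum(
        comb(c, j) * comb(n - c, k - j)  # ways to pick j correct, k-j incorrect
        for j in range(m, min(c, k) + 1)
    ) / comb(n, k)

# Example: a model answers a problem correctly in 4 of n = 8 generations.
print(pass_at_k(8, 4, 1))         # single-shot accuracy: 0.5
print(g_pass_at_k(8, 4, 4, 0.5))  # at least half of 4 samples correct
print(g_pass_at_k(8, 4, 4, 1.0))  # all 4 samples correct: much lower
```

Under this reading, the gap between the τ = 1/k curve (peak capability) and the τ = 1 curve (full consistency) is one way to see how a model can score well on single-shot accuracy while remaining unstable across repeated samples.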
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Complex Problem Solving
Mathematical Challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

G-Pass@k
LiveMathBench
Complex Problem Solving
Junnan Liu
Shanghai AI Laboratory
Hongwei Liu
Shanghai AI Laboratory
Linchen Xiao
Shanghai AI Laboratory
Ziyi Wang
Shanghai AI Laboratory
Kuikun Liu
Shanghai AI Laboratory
Songyang Gao
Shanghai AI Laboratory
Wenwei Zhang
Shanghai AI Laboratory
Large Language Model · Scalable Oversight · Artificial Intelligence
Songyang Zhang
Shanghai AI Laboratory
Kai Chen
Shanghai AI Laboratory