L0-Reasoning Bench: Evaluating Procedural Correctness in Language Models via Simple Program Execution

📅 2025-03-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing evaluations of language models emphasize final-answer accuracy while neglecting the correctness of intermediate reasoning steps, particularly "level-0" process consistency: the faithful execution of elementary, rule-based procedures. Method: The authors introduce L0-Bench, a benchmark built from synthetically generated Python functions whose ground-truth step-by-step execution traces serve as the standard for process-correctness evaluation; it defines and quantifies level-0 reasoning capability. The synthetic construction enables controllable, multidimensional assessment, including input context length, majority-voting ensemble size, and number of reasoning steps. Results: Experiments show systematic degradation in process consistency across all models as trace length increases; larger models and reasoning-enhanced variants are more robust but still fall short of reliable procedural fidelity. These findings provide diagnostic insight and concrete directions for building reliable stepwise-reasoning systems.

📝 Abstract
Complex reasoning tasks often rely on the ability to consistently and accurately apply simple rules across incremental steps, a foundational capability which we term "level-0" reasoning. To systematically evaluate this capability, we introduce L0-Bench, a language model benchmark for testing procedural correctness -- the ability to generate correct reasoning processes, complementing existing benchmarks that primarily focus on outcome correctness. Given synthetic Python functions with simple operations, L0-Bench grades models on their ability to generate step-by-step, error-free execution traces. The synthetic nature of L0-Bench enables systematic and scalable generation of test programs along various axes (e.g., number of trace steps). We evaluate a diverse array of recent closed-source and open-weight models on a baseline test set. All models exhibit degradation as the number of target trace steps increases, while larger models and reasoning-enhanced models better maintain correctness over multiple steps. Additionally, we use L0-Bench to explore test-time scaling along three dimensions: input context length, number of solutions for majority voting, and inference steps. Our results suggest substantial room to improve "level-0" reasoning and potential directions to build more reliable reasoning systems.
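The setup described in the abstract can be sketched with a toy example. The concrete program and trace format used by L0-Bench is not shown on this page, so the function, the trace strings, and the grading helper below are hypothetical illustrations of the idea: a synthetic program made of simple operations, and a grader that requires every intermediate step (not just the final answer) to be correct.

```python
def step_program(x):
    """A toy synthetic function built from simple, rule-based updates.
    (Illustrative only; not the actual L0-Bench program format.)"""
    trace = []
    for i in range(3):
        x = x + i  # elementary update rule applied at each step
        trace.append(f"step {i}: x = {x}")
    return x, trace

def grade_trace(predicted, reference):
    """Process correctness: every intermediate step must match the
    reference execution trace, not only the final value."""
    return predicted == reference

final, gold = step_program(2)
# final -> 5
# gold  -> ["step 0: x = 2", "step 1: x = 3", "step 2: x = 5"]
```

A model that reaches the right final value via a wrong intermediate step would still fail `grade_trace`, which is the distinction between outcome correctness and the procedural correctness L0-Bench targets.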
Problem

Research questions and friction points this paper is trying to address.

Evaluating procedural correctness in language models via simple program execution
Testing models' ability to generate step-by-step, error-free execution traces
Improving level-0 reasoning for more reliable reasoning systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces L0-Bench for procedural correctness evaluation
Uses synthetic Python functions for systematic testing
Explores test-time scaling across multiple dimensions
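One of the test-time scaling axes above, number of solutions for majority voting, follows the standard self-consistency recipe: sample several candidate solutions and keep the most common final answer. A minimal sketch (the helper name and interface are ours, not from the paper):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent final answer among sampled solutions.
    Scaling the length of `answers` is the voting-ensemble-size axis."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

majority_vote(["5", "5", "7"])  # -> "5"
```

Note that voting over final answers can mask process errors: two traces may agree on the answer while disagreeing on intermediate steps, which is exactly what trace-level grading is meant to detect.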
Authors
- Simeng Sun
- Cheng-Ping Hsieh
- Faisal Ladhak (Nvidia; NLP)
- Erik Arakelyan
- Santiago Akle Serano
- Boris Ginsburg (NVIDIA; Deep Learning, Speech Recognition, Speech Synthesis)