π€ AI Summary
Large language models (LLMs) lack the ability to evaluate intermediate reasoning steps and perform self-correction during complex reasoning. Method: We propose FINEREASON, a fine-grained logic puzzle benchmark that formalizes System 2βstyle reflective reasoning as verifiable atomic state operations. Specifically, we introduce two novel tasksβ*state checking* (verifying correctness of intermediate conclusions) and *state transition* (rectifying erroneous reasoning paths)βand construct multi-step reasoning trajectories over structured logic puzzles, accompanied by state-level annotation and supervised fine-tuning. Contribution/Results: Our work fills a critical gap in evaluating intermediate reasoning processes, enabling real-time diagnostic analysis and path-level correction. Experiments demonstrate a +5.1% improvement in mathematical reasoning accuracy on GSM8K, alongside substantial gains in reasoning robustness and interpretability.
π Abstract
Many challenging reasoning tasks require not just rapid, intuitive responses, but a more deliberate, multi-step approach. Recent progress in large language models (LLMs) highlights an important shift from the"System 1"way of quick reactions to the"System 2"style of reflection-and-correction problem solving. However, current benchmarks heavily rely on the final-answer accuracy, leaving much of a model's intermediate reasoning steps unexamined. This fails to assess the model's ability to reflect and rectify mistakes within the reasoning process. To bridge this gap, we introduce FINEREASON, a logic-puzzle benchmark for fine-grained evaluation of LLMs' reasoning capabilities. Each puzzle can be decomposed into atomic steps, making it ideal for rigorous validation of intermediate correctness. Building on this, we introduce two tasks: state checking, and state transition, for a comprehensive evaluation of how models assess the current situation and plan the next move. To support broader research, we also provide a puzzle training set aimed at enhancing performance on general mathematical tasks. We show that models trained on our state checking and transition data demonstrate gains in math reasoning by up to 5.1% on GSM8K.