Teaching LLMs for Step-Level Automatic Math Correction via Reinforcement Learning

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing automated mathematical assessment systems predominantly evaluate only final answers, lacking fine-grained step-level error detection and pedagogically meaningful feedback. Method: This paper models step-level mathematical error correction as a reinforcement learning (RL) task. The authors propose a space-constrained policy network to enhance training stability and design a fine-grained reward network that converts sparse binary human feedback into continuous, differentiable reward signals. Additionally, they recast the step-level text classification task as an RL problem, enabling step-level error-correction fine-tuning of large language models (LLMs). Contribution/Results: Evaluated on two benchmark datasets, the approach significantly outperforms eleven strong baselines. It achieves substantial improvements in step-level error localization accuracy while also enhancing semantic understanding and logical reasoning, thereby overcoming the fundamental limitations of conventional answer-level evaluation paradigms.

📝 Abstract
Automatic math correction aims to check students' solutions to mathematical problems via artificial intelligence technologies. Most existing studies focus on judging the final answer at the problem level, ignoring detailed feedback on each step of the problem-solving process, which requires semantic understanding and reasoning abilities. In this paper, we propose a reinforcement learning (RL)-based method, named StepAMC, to boost large language models (LLMs) for step-level automatic math correction. In particular, we convert step-level automatic math correction from a text classification task into an RL problem to enhance the reasoning capabilities of LLMs. We then design a space-constrained policy network to improve the stability of RL, and introduce a fine-grained reward network to convert binary human feedback into a continuous value. We conduct extensive experiments on two benchmark datasets, and the results show that our model outperforms eleven strong baselines.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs for step-level math correction
Convert math correction into RL problem
Improve reasoning with fine-grained feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning boosts LLM math correction
Space-constrained policy network enhances RL stability
Fine-grained reward network processes human feedback
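The fine-grained reward idea, turning a binary right/wrong step judgment into a continuous training signal, could be sketched roughly as follows. This is a minimal illustration only: the function names, the sigmoid mapping, the [-1, 1] rescaling, and the discounted sum are assumptions for exposition, not the paper's actual reward-network design, which this page does not detail.

```python
import math

def continuous_step_reward(logit: float) -> float:
    """Map a reward network's raw score for one solution step into a
    continuous reward in [-1, 1], rather than a binary 0/1 label.
    (Hypothetical sketch; not the paper's exact formulation.)"""
    p_correct = 1.0 / (1.0 + math.exp(-logit))  # sigmoid -> probability
    return 2.0 * p_correct - 1.0                # rescale to [-1, 1]

def trajectory_return(step_logits, gamma: float = 0.99) -> float:
    """Aggregate per-step rewards over a solution into a discounted
    return, the kind of scalar an RL policy update would consume."""
    return sum((gamma ** t) * continuous_step_reward(z)
               for t, z in enumerate(step_logits))
```

A dense signal like this gives the policy gradient information on every step, whereas a binary answer-level label rewards only the final outcome.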
Authors
Junsong Li (East China Normal University)
Jie Zhou (School of Computer Science and Technology, East China Normal University, Shanghai, China)
Yutao Yang (School of Computer Science and Technology, East China Normal University, Shanghai, China)
Bihao Zhan (East China Normal University)
Qianjun Pan (East China Normal University)
Yuyang Ding (Soochow University)
Qin Chen (School of Computer Science and Technology, East China Normal University, Shanghai, China)
Jiang Bo (Lab of AI for Education, East China Normal University, Shanghai, China)
Xin Lin (Lab of AI for Education, East China Normal University, Shanghai, China)
Liang He (School of Computer Science and Technology, East China Normal University, Shanghai, China)