🤖 AI Summary
Existing automated mathematical assessment systems predominantly evaluate only final answers, lacking fine-grained step-level error detection and pedagogically meaningful feedback.
Method: This paper pioneers modeling step-level mathematical error correction as a reinforcement learning (RL) task. We propose a space-constrained policy network to enhance training stability and design a fine-grained reward network that transforms sparse binary human feedback into differentiable, continuous reward signals. Additionally, we introduce a novel framework that reformulates the text classification task as an RL problem, enabling step-level error-correction fine-tuning of large language models (LLMs).
Contribution/Results: Evaluated on two benchmark datasets, our approach significantly outperforms eleven strong baselines. It achieves substantial improvements in step-level error localization accuracy while also enhancing the model's semantic understanding and logical reasoning capabilities, thereby overcoming the fundamental limitations of conventional answer-level evaluation paradigms.
📝 Abstract
Automatic math correction aims to check students' solutions to mathematical problems via artificial intelligence technologies. Most existing studies focus on judging the final answer at the problem level, ignoring detailed feedback on each step of the math problem-solving process, which requires abilities of semantic understanding and reasoning. In this paper, we propose a reinforcement learning (RL)-based method, named StepAMC, to boost large language models (LLMs) for step-level automatic math correction. Specifically, we reformulate step-level automatic math correction from a text classification task into an RL problem to enhance the reasoning capabilities of LLMs. We then design a space-constrained policy network to improve the stability of RL training, and introduce a fine-grained reward network to convert binary human feedback into continuous values. We conduct extensive experiments on two benchmark datasets, and the results show that our model outperforms eleven strong baselines.
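The fine-grained reward network is described only at a high level. As a minimal illustrative sketch (not the paper's actual implementation), a tiny logistic scorer can show the core idea: binary correct/incorrect feedback on each step trains a model whose output is a continuous value in (0, 1), usable as a dense reward signal for RL. The two-dimensional step features and the toy labels below are hypothetical placeholders.

```python
import math

def sigmoid(z):
    """Squash a logit into (0, 1), giving a continuous reward value."""
    return 1.0 / (1.0 + math.exp(-z))

class RewardScorer:
    """Toy stand-in for a reward network: logistic regression over
    hypothetical per-step features, trained on binary step labels."""

    def __init__(self, dim, lr=0.5):
        self.w = [0.0] * dim
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        # Continuous reward in (0, 1) for one solution step.
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)

    def fit(self, data, epochs=200):
        # data: list of (features, label) pairs; label 1 = step correct.
        for _ in range(epochs):
            for x, y in data:
                p = self.score(x)
                g = p - y  # gradient of log-loss w.r.t. the logit
                for i, xi in enumerate(x):
                    self.w[i] -= self.lr * g * xi
                self.b -= self.lr * g

# Toy binary human feedback on four steps (features are made up).
train = [([1.0, 1.0], 1), ([1.0, 0.0], 1), ([0.0, 1.0], 0), ([0.0, 0.0], 0)]
scorer = RewardScorer(dim=2)
scorer.fit(train)

# A previously unseen step now gets a graded, continuous reward
# instead of a sparse 0/1 signal.
reward = scorer.score([1.0, 0.5])
```

The point of the sketch is the interface, not the model class: the RL fine-tuning loop consumes `scorer.score(...)` as a dense per-step reward, which is what makes the sparse binary feedback differentiable and continuous.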