LogicPuzzleRL: Cultivating Robust Mathematical Reasoning in LLMs via Reinforcement Learning

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit limited generalization on structured reasoning tasks, as supervised fine-tuning often induces domain-specific heuristics rather than robust, generalizable reasoning strategies. Method: We propose a "play to learn" reinforcement learning framework that trains LLMs on seven custom-designed logic puzzles—spanning constraint propagation, spatial consistency, and symbolic deduction—using binary correctness rewards to drive iterative hypothesis generation and refinement. The approach employs policy gradient optimization without external symbolic tools or supervision. Contribution/Results: The method significantly improves LLM generalization on mid-difficulty mathematical benchmarks (e.g., MATH, AMC), notably strengthening algebraic manipulation, geometric inference, and combinatorial reasoning. It demonstrates that transferable, structured multi-step reasoning can emerge in purely neural models through unsupervised, interactive puzzle solving, internalized end-to-end without architectural changes or external tooling.

📝 Abstract
Large language models (LLMs) excel at many supervised tasks but often struggle with structured reasoning in unfamiliar settings. This discrepancy suggests that standard fine-tuning pipelines may instill narrow, domain-specific heuristics rather than fostering general-purpose thinking strategies. In this work, we propose a "play to learn" framework that fine-tunes LLMs through reinforcement learning on a suite of seven custom logic puzzles, each designed to cultivate distinct reasoning skills such as constraint propagation, spatial consistency, and symbolic deduction. Using a reinforcement learning setup with verifiable rewards, models receive binary feedback based on puzzle correctness, encouraging iterative, hypothesis-driven problem solving. We demonstrate that this training approach significantly improves out-of-distribution performance on a range of mathematical benchmarks, especially for mid-difficulty problems that require multi-step reasoning. Analyses across problem categories and difficulty levels reveal that puzzle training promotes transferable reasoning routines, strengthening algebraic manipulation, geometric inference, and combinatorial logic, while offering limited gains on rote or highly specialized tasks. These findings show that reinforcement learning over logic puzzles reshapes the internal reasoning of LLMs, enabling more robust and compositional generalization without relying on task-specific symbolic tools.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs' structured reasoning in unfamiliar settings
Developing general-purpose thinking via reinforcement learning on logic puzzles
Improving multi-step mathematical reasoning without task-specific tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning fine-tunes LLMs on logic puzzles
Binary feedback encourages hypothesis-driven problem solving
Puzzle training enhances transferable multi-step reasoning skills
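The verifiable binary reward central to the framework can be sketched in a few lines. The puzzle format, answer marker, and function names below are illustrative assumptions for this sketch, not the paper's actual implementation:

```python
# Sketch of a binary "verifiable reward" for puzzle answers.
# The answer marker and helper names are assumptions for illustration,
# not the paper's code.

def parse_answer(completion: str) -> str:
    """Extract the model's final answer from its generated text.
    Assumes the answer follows a trailing 'Answer:' marker."""
    marker = "Answer:"
    idx = completion.rfind(marker)
    return completion[idx + len(marker):].strip() if idx != -1 else ""

def binary_reward(completion: str, gold_answer: str) -> float:
    """Return 1.0 iff the parsed answer matches the verified solution,
    else 0.0 -- the binary correctness signal that drives the
    policy-gradient updates."""
    return 1.0 if parse_answer(completion) == gold_answer.strip() else 0.0

# Example: a constraint-style puzzle with a single verifiable solution.
completion = "B cannot sit next to C, so the order is A, C, B. Answer: ACB"
print(binary_reward(completion, "ACB"))  # 1.0 for a correct answer
```

Because the reward checks only final correctness, no step-level supervision or external symbolic verifier is needed: any chain of reasoning that reaches the verified answer is reinforced.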
Zhen Hao Wong
Peking University
Jingwen Deng
Peking University
Runming He
Peking University
Zirong Chen
Vanderbilt University
cyber-physical systems, natural language processing, artificial intelligence, machine learning
Qijie You
Peking University
Hejun Dong
Peking University
Hao Liang
Peking University
Chengyu Shen
Peking University
Bin Cui
Peking University
Wentao Zhang
Institute of Physics, Chinese Academy of Sciences
photoemission, superconductivity, cuprate, HTSC, time-resolved