🤖 AI Summary
This work investigates how to enable small open-source language models to achieve reasoning performance on challenging Olympiad-level mathematical proof tasks comparable to that of large closed-source models. To this end, the authors propose a three-stage training framework comprising supervised fine-tuning, reinforcement learning with rubric-based rewards, and an iterative refinement mechanism built on a reasoning cache. The resulting 4B-parameter model, QED-Nano, is the first at this scale to approach the proving capability of Gemini 3 Pro, significantly outperforming larger open-source counterparts such as Nomos-1 and GPT-OSS-120B while substantially reducing inference costs. The project releases the complete training and evaluation pipeline and introduces a reasoning cache mechanism that enables stepwise optimization of long-form proofs.
📝 Abstract
Proprietary AI systems have recently demonstrated impressive capabilities on complex proof-based problems, with gold-level performance reported at the 2025 International Mathematical Olympiad (IMO). However, the training pipelines behind these systems remain largely undisclosed, and their reliance on large internal models and scaffolds makes them expensive to run, difficult to reproduce, and hard to study or improve upon. This raises a central question: can small, open models also be trained to achieve competitive reasoning performance on difficult Olympiad-level math? In this paper, we answer this question by building QED-Nano, a 4B model post-trained for Olympiad-level proofs. Our training recipe has three stages: (1) supervised fine-tuning to instill good proof-writing style by distilling from DeepSeek-Math-V2, (2) reinforcement learning (RL) with rubric-based rewards, and (3) extending RL with a reasoning cache, which decomposes long proofs into iterative summarize-and-refine cycles and enables stronger test-time reasoning. QED-Nano surpasses the proof-generation performance of much larger open models, including Nomos-1 and GPT-OSS-120B, and approaches the performance of proprietary models like Gemini 3 Pro, at a fraction of the inference cost. To support further research on open mathematical reasoning, we release the full QED-Nano pipeline, including the QED-Nano and QED-Nano-SFT models, the FineProofs-SFT and FineProofs-RL datasets, and the training and evaluation code.
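The summarize-and-refine loop described in stage (3) can be pictured as a small control structure around two model calls. The sketch below is purely illustrative and is not taken from the paper's released code: the names `ReasoningCache`, `iterative_refine`, and the injected `summarize`/`refine` callables (which would be LLM calls in practice) are all assumptions about one plausible way such a cache could work.

```python
# Hypothetical sketch of an iterative summarize-and-refine loop with a
# "reasoning cache". All names here are illustrative assumptions, not the
# paper's actual implementation.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ReasoningCache:
    """Holds compact summaries of proof progress across refinement cycles."""
    summaries: List[str] = field(default_factory=list)

    def context(self) -> str:
        # The cache stands in for the full (long) proof transcript,
        # keeping the per-cycle prompt short.
        return "\n".join(self.summaries)


def iterative_refine(problem: str,
                     summarize: Callable[[str], str],
                     refine: Callable[[str, str], str],
                     cycles: int = 3) -> str:
    """Decompose a long proof into summarize-and-refine cycles.

    Each cycle: (1) refine the current draft conditioned on the problem plus
    the cached summaries, then (2) summarize the new draft into the cache.
    `summarize` and `refine` would be model calls in practice; here they are
    injected so the loop mechanics can be shown in isolation.
    """
    cache = ReasoningCache()
    draft = ""
    for _ in range(cycles):
        draft = refine(problem + "\n" + cache.context(), draft)
        cache.summaries.append(summarize(draft))
    return draft
```

The key design point this sketch tries to capture is that each refinement step sees only the problem and the short cached summaries, not the entire proof history, which is what makes long-form proofs tractable for a small model at test time.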