StepFun-Formalizer: Unlocking the Autoformalization Potential of LLMs through Knowledge-Reasoning Fusion

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models achieve low accuracy when autoformalizing natural-language mathematical statements, largely because they never jointly acquire formal-language domain knowledge and the ability to map informal natural language to formal logical representations. This paper introduces ThinkingF, the first systematic training framework to fuse these two capabilities, built from three components: (1) a formal-knowledge-augmented dataset, (2) expert-guided, templated generation of reasoning trajectories, and (3) a hybrid optimization strategy combining knowledge distillation, supervised fine-tuning (SFT), and reinforcement learning with verifiable rewards (RLVR). Evaluated at 7B and 32B scales, StepFun-Formalizer-32B achieves BEq@1 scores of 40.5% on FormalMATH-Lite and 26.7% on ProverBench, substantially outperforming all general-purpose and task-specific baselines and establishing a new state of the art.
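
To make the task concrete, here is a minimal Lean 4 illustration of what autoformalization produces; the statement is our own toy example, not one drawn from the paper's datasets. It shows why formal-language knowledge matters: a faithful translation has to pick the right library objects.

```lean
import Mathlib

-- Informal statement: "The square root of 2 is irrational."
-- A faithful formalization must select the correct formal objects
-- (`Real.sqrt`, `Irrational`) rather than invent ad-hoc encodings.
theorem sqrt_two_irrational : Irrational (Real.sqrt 2) :=
  irrational_sqrt_two  -- Mathlib's existing proof of this fact
```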

📝 Abstract
Autoformalization aims to translate natural-language mathematical statements into a formal language. While LLMs have accelerated progress in this area, existing methods still suffer from low accuracy. We identify two key abilities for effective autoformalization: comprehensive mastery of formal-language domain knowledge, and the reasoning capability for natural-language problem understanding and informal-formal alignment. Without the former, a model cannot identify the correct formal objects; without the latter, it struggles to interpret real-world contexts and map them precisely into formal expressions. To address these gaps, we introduce ThinkingF, a data synthesis and training pipeline that improves both abilities. First, we construct two datasets: one by distilling and selecting large-scale examples rich in formal knowledge, and another by generating informal-to-formal reasoning trajectories guided by expert-designed templates. We then apply SFT and RLVR with these datasets to further fuse and refine the two abilities. The resulting 7B and 32B models exhibit both comprehensive formal knowledge and strong informal-to-formal reasoning. Notably, StepFun-Formalizer-32B achieves SOTA BEq@1 scores of 40.5% on FormalMATH-Lite and 26.7% on ProverBench, surpassing all prior general-purpose and specialized models.
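
A note on the metric the abstract reports: under our reading of the benchmarks, BEq@1 accepts a generated formalization only when it can be proved equivalent to the reference statement. The snippet below is a toy sketch of that idea; the statements are invented, and the real pipeline discharges both directions with automated tactics rather than a hand-written proof.

```lean
-- Toy BEq-style check: candidate and reference differ syntactically but
-- are provably equivalent, so the candidate would be accepted.
-- Statements are illustrative, not taken from FormalMATH or ProverBench.
example :
    (∀ n : Nat, n ≤ n * n ∨ n = 0) ↔ (∀ n : Nat, n = 0 ∨ n ≤ n * n) := by
  constructor <;> intro h n <;> exact (h n).symm
```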
Problem

Research questions and friction points this paper is trying to address.

Improving autoformalization accuracy in LLMs
Enhancing formal-language knowledge and reasoning capabilities
Bridging natural-language understanding to formal-expression mapping (see the sketch below)
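
As a toy illustration of that mapping problem (our own example, not from the paper): informal prose leaves constraints implicit that a formalization must make explicit.

```lean
-- Informal: "Let x be a positive integer; then x + 1 is at least 2."
-- "Positive integer" must become the explicit hypothesis `0 < x`;
-- omitting it silently changes the statement, since Nat includes 0.
example (x : Nat) (hx : 0 < x) : 2 ≤ x + 1 := by omega
```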
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge-Reasoning Fusion for autoformalization
Data synthesis with expert-guided reasoning trajectories
SFT and RLVR training to refine model abilities (see the reward sketch below)
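
RLVR presupposes a reward that can be checked mechanically. The Python sketch below shows one plausible shape for such a reward on this task, assuming a binary signal gated on compilation plus an equivalence check; `lean_compiles` and `beq_equivalent` are hypothetical stubs standing in for whatever verifier the paper actually uses.

```python
def lean_compiles(stmt: str) -> bool:
    """Hypothetical gate: does the generated Lean statement type-check?
    Stubbed here; a real pipeline would invoke the Lean toolchain."""
    return stmt.strip().startswith(("theorem", "example"))  # placeholder

def beq_equivalent(generated: str, reference: str) -> bool:
    """Hypothetical stand-in for a BEq-style equivalence prover."""
    return generated.split() == reference.split()  # placeholder: token match

def verifiable_reward(generated: str, reference: str) -> float:
    # Binary verifiable reward: syntactic gate first, then semantic check.
    if not lean_compiles(generated):
        return 0.0
    return 1.0 if beq_equivalent(generated, reference) else 0.0
```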
👥 Authors
Yutong Wu
SKL of Processors, Institute of Computing Technology, CAS; University of Chinese Academy of Sciences
Di Huang
SKL of Processors, Institute of Computing Technology, CAS
Ruosi Wan
StepFun
deep learning, optimization, Large Language Model
Yue Peng
University of Science and Technology of China
geometry optimization, physical simulation
Shijie Shang
StepFun Inc.
Chenrui Cao
SKL of Processors, Institute of Computing Technology, CAS; University of Science and Technology of China
Lei Qi
SKL of Processors, Institute of Computing Technology, CAS; University of Science and Technology of China
Rui Zhang
SKL of Processors, Institute of Computing Technology, CAS
Zidong Du
SKL of Processors, Institute of Computing Technology, CAS
Jie Yan
jieyan@amss.ac.cn
deep generative models, clustering
Xing Hu
SKL of Processors, Institute of Computing Technology, CAS