Re:Form -- Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs: A Preliminary Study on Dafny

📅 2025-07-22
🤖 AI Summary
Current large language models (LLMs) rely on informal natural language, limiting their ability to generate formally verifiable programs; their reinforcement learning (RL) training suffers from unreliable and non-scalable reward signals due to the absence of rigorous program correctness verification. Method: We propose a formal-language–based reasoning paradigm grounded in Dafny—a specification-aware, verification-capable programming language—and introduce a closed-loop training framework enabling fully automated, mathematically provable program verification. Contribution/Results: (1) We release DafnyComp, the first benchmark for formal program synthesis and verification; (2) we design an automated data curation pipeline and an RL training mechanism tightly integrated with Dafny’s built-in verifier; (3) we propose a hybrid supervised fine-tuning + RL regularization strategy that drastically reduces reliance on human-written chain-of-thought annotations. Experiments show that a 0.5B-parameter model outperforms leading proprietary LLMs on DafnyComp, with RL fine-tuning further enhancing generalization—paving a scalable path toward trustworthy, large-scale formal software verification.
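The closed-loop training signal described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the function name `dafny_verification_reward` and the reward values (1.0 / 0.1 / 0.0) are assumptions for illustration; only the `dafny verify` CLI invocation reflects the real Dafny toolchain.

```python
import os
import subprocess
import tempfile

def dafny_verification_reward(program_text: str, timeout_s: int = 60) -> float:
    """Hypothetical RL reward: run the Dafny verifier on a candidate
    program and map the outcome to a scalar training signal.
    The tiered reward values are illustrative, not the paper's."""
    with tempfile.NamedTemporaryFile(mode="w", suffix=".dfy", delete=False) as f:
        f.write(program_text)
        path = f.name
    try:
        # `dafny verify` exits with code 0 when every proof obligation
        # is discharged, i.e. the program is mathematically verified.
        result = subprocess.run(
            ["dafny", "verify", path],
            capture_output=True, text=True, timeout=timeout_s,
        )
        if result.returncode == 0:
            return 1.0   # fully verified program
        # Assumed heuristic: distinguish "parses but fails verification"
        # from "does not parse" by inspecting the verifier's output.
        if "error" in result.stdout.lower() and "parse" not in result.stdout.lower():
            return 0.1   # syntactically valid but unverified
        return 0.0       # does not parse
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return 0.0       # verifier timed out or Dafny is not installed
    finally:
        os.unlink(path)
```

Because the reward comes from the verifier rather than from human annotation, this kind of loop is what makes the training signal both reliable (mathematically provable) and scalable (fully automated).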

📝 Abstract
Existing informal-language-based (e.g., natural language) Large Language Models (LLMs) trained with Reinforcement Learning (RL) face a significant challenge: their verification processes, which provide crucial training signals, are neither reliable nor scalable. In fact, even the prevalent large proprietary models can hardly generate verifiable programs. A promising yet largely uncharted alternative is formal-language-based reasoning. Grounding LLMs in rigorous formal systems, where generative models operate in formal language spaces (e.g., Dafny), enables automatic and mathematically provable verification of their reasoning processes and outcomes. This capability is pivotal for achieving large-scale, reliable formal software verification. It is common practice to employ human-annotated chain-of-thought and other human priors to induce the reasoning and coding capabilities of LLMs. Unfortunately, providing such priors becomes prohibitively costly when supervising complex programming tasks. In this work, we systematically explore ways to reduce human priors, using the formal language Dafny as the main environment for our pilot study. Our pipeline relies primarily on an automatic and scalable data curation pipeline, together with careful RL designs integrated with feedback from the formal language verifier. We introduce DafnyComp, a benchmark of compositional formal programs with auto-formalized specifications for specification reasoning. Our supervised fine-tuning (SFT) stage enables even small models (e.g., 0.5B) to generate syntactically valid and verifiable Dafny code, surpassing proprietary models. RL with regularization further improves performance, achieving stronger generalization to out-of-domain tasks and outperforming all strong baselines on the challenging DafnyComp benchmark.
Problem

Research questions and friction points this paper is trying to address.

Improving scalability and reliability in formal software verification using LLMs
Reducing dependence on human priors for complex programming tasks
Enhancing generalization and performance in formal language-based reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses formal language Dafny for reliable verification
Reduces human priors with automatic data curation
Integrates RL with formal verifier feedback
👥 Authors
Chuanhao Yan (Shanghai AI Laboratory)
Fengdi Che (University of Alberta)
Xuhan Huang (The Chinese University of Hong Kong, Shenzhen)
Xu Xu (Shanghai AI Laboratory)
Xin Li (Shanghai AI Laboratory)
Yizhi Li (University of Manchester, M-A-P)
Xingwei Qu (Shanghai AI Laboratory)
Jingzhe Shi (Tsinghua University)
Zhuangzhuang He (Shanghai AI Laboratory)
Chenghua Lin (University of Manchester)
Yaodong Yang (Shanghai AI Laboratory)
Binhang Yuan (Shanghai AI Laboratory)
Hang Zhao (Shanghai AI Laboratory)
Yu Qiao (Shanghai AI Laboratory)
Bowen Zhou (Shanghai AI Laboratory)
Jie Fu (Shanghai AI Laboratory)