Inferring multiple helper Dafny assertions with LLMs

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dafny provides strong correctness guarantees but imposes high verification overhead, because proofs often depend on extensive manually written auxiliary assertions. This paper introduces DAISY, the first systematic framework for automatic inference of helper assertions by large language models (LLMs) in scenarios where multiple assertions are missing. Its core contributions are: (1) a hybrid fault-localization method that integrates LLM-generated candidates with Dafny's error diagnostics; (2) a verification-oriented assertion taxonomy tailored to Dafny's specification language; and (3) the empirical finding that verification is robust to assertion redundancy: fully recovering the original assertions is unnecessary for a successful proof. In evaluation, DAISY achieves a 63.4% verification success rate when a single assertion is missing and 31.7% when multiple assertions are missing. Notably, for some programs DAISY completes an equivalent verification with fewer assertions than the original, reducing proof-engineering effort.
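The hybrid fault-localization step can be pictured as follows. This is a minimal sketch, not DAISY's actual implementation: the diagnostic format assumed below (`file.dfy(line,col): Error: ...`) matches Dafny's default output, but the merging policy and all function names are illustrative assumptions.

```python
import re

def parse_error_lines(dafny_output: str) -> list[int]:
    """Extract source line numbers from Dafny error diagnostics.

    Matches messages like 'file.dfy(12,4): Error: assertion might not hold'.
    """
    return sorted({int(m.group(1))
                   for m in re.finditer(r"\((\d+),\d+\): Error", dafny_output)})

def merge_candidates(llm_lines: list[int], error_lines: list[int]) -> list[int]:
    """Hypothetical hybrid localization policy: rank lines that both the
    LLM and the verifier flag first, then fall back to the remaining
    candidates from either source. In a full loop, candidate assertions
    would be inserted at these lines and the verifier re-run."""
    agreed = [l for l in error_lines if l in set(llm_lines)]
    rest = [l for l in error_lines + llm_lines if l not in agreed]
    return agreed + sorted(set(rest))
```

Ranking verifier-confirmed locations first reflects the paper's observation that error messages alone are a useful but incomplete localization signal.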


📝 Abstract
The Dafny verifier provides strong correctness guarantees but often requires numerous manual helper assertions, creating a significant barrier to adoption. We investigate the use of Large Language Models (LLMs) to automatically infer missing helper assertions in Dafny programs, with a primary focus on cases involving multiple missing assertions. To support this study, we extend the DafnyBench benchmark with curated datasets where one, two, or all assertions are removed, and we introduce a taxonomy of assertion types to analyze inference difficulty. Our approach refines fault localization through a hybrid method that combines LLM predictions with error-message heuristics. We implement this approach in a new tool called DAISY (Dafny Assertion Inference SYstem). While our focus is on multiple missing assertions, we also evaluate DAISY on single-assertion cases. DAISY verifies 63.4% of programs with one missing assertion and 31.7% with multiple missing assertions. Notably, many programs can be verified with fewer assertions than originally present, highlighting that proofs often admit multiple valid repair strategies and that recovering every original assertion is unnecessary. These results demonstrate that automated assertion inference can substantially reduce proof engineering effort and represent a step toward more scalable and accessible formal verification.
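As an illustration of how the one-, two-, or all-missing benchmark variants described above can be constructed, the following sketch strips standalone `assert` statements from a Dafny source file. It is a simplification under stated assumptions, not DafnyBench's actual tooling: it handles only single-line `assert ...;` statements and ignores multi-line asserts and `assert ... by { ... }` proof blocks.

```python
import re

# Matches a line that consists solely of a single-line assert statement
# (an assumption: real benchmark curation must also handle multi-line cases).
ASSERT_RE = re.compile(r"^\s*assert\b[^;]*;\s*$")

def find_asserts(source: str) -> list[int]:
    """Return 0-based indices of lines that are standalone assert statements."""
    return [i for i, line in enumerate(source.splitlines())
            if ASSERT_RE.match(line)]

def remove_asserts(source: str, indices: list[int]) -> str:
    """Drop the selected assert lines, yielding a program variant with
    one, two, or all helper assertions missing."""
    drop = set(indices)
    return "\n".join(line for i, line in enumerate(source.splitlines())
                     if i not in drop)
```

A curation script would enumerate `find_asserts`, choose which indices to drop for each variant, and keep only variants whose original program still verifies.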
Problem

Research questions and friction points this paper is trying to address.

Automatically inferring multiple missing helper assertions in Dafny programs
Reducing manual proof effort in formal verification through LLM-based assistance
Addressing scalability challenges in Dafny verification with assertion inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically infers missing Dafny helper assertions using LLMs
Combines LLM predictions with error-message heuristics for localization
Implements the hybrid approach in the DAISY tool and validates it end to end