Towards Formal Verification of LLM-Generated Code from Natural Language Prompts

📅 2025-07-17
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
LLMs frequently generate erroneous code that users struggle to detect, undermining the reliability and usability of natural language programming. To address this, we propose Astrogator, the first end-to-end formal verification system for Ansible. It introduces a formal but natural-language-like query language that enables non-expert users to precisely specify their intent. Combining a calculus of Ansible program behavior with a symbolic interpreter, Astrogator automatically verifies whether LLM-generated code satisfies the user-specified intent, without requiring manual annotations or predefined specifications. Evaluated on a benchmark of 21 real-world tasks, Astrogator successfully verifies 83% of correct code snippets and detects 92% of erroneous ones. This advances the trustworthiness and accessibility of AI-powered programming assistants in the infrastructure-as-code domain.

📝 Abstract
In the past few years, LLMs have emerged as a tool that can aid programmers by taking natural language descriptions and generating code based on them. However, LLMs often generate incorrect code that users need to fix, and the literature suggests users often struggle to detect these errors. In this work we seek to offer formal guarantees of correctness for LLM-generated code; such guarantees could improve the experience of using AI code assistants and potentially enable natural language programming for users with little or no programming knowledge. To address this challenge, we propose to incorporate a formal query language that can represent a user's intent in a formally defined but natural-language-like manner that the user can confirm matches their intent. Then, using such a query, we propose to verify LLM-generated code to ensure it matches the user's intent. We implement these ideas in our system, Astrogator, for the Ansible programming language; the system includes such a formal query language, a calculus for representing the behavior of Ansible programs, and a symbolic interpreter which is used for the verification. On a benchmark suite of 21 code-generation tasks, our verifier is able to verify correct code in 83% of cases and identify incorrect code in 92%.
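Astrogator's actual query language, behavior calculus, and symbolic interpreter are not reproduced here. As a rough, hypothetical illustration of the workflow the abstract describes, the following Python sketch symbolically "executes" a toy Ansible-like task list into an abstract filesystem state and checks that state against a simple intent predicate; every name and structure below is invented for illustration, not taken from the paper.

```python
# Toy sketch of the verify-against-intent workflow (all names hypothetical).
# A "task" is a dict mimicking an Ansible file task, the symbolic interpreter
# folds tasks into an abstract state, and an "intent" is a predicate on that
# state, standing in for a query in a formal query language.

def interpret(tasks):
    """Symbolically apply each task to an abstract filesystem state."""
    state = {}  # path -> {"present": bool, "mode": str | None}
    for task in tasks:
        if task["module"] == "file":
            path = task["path"]
            if task.get("state") == "absent":
                state[path] = {"present": False, "mode": None}
            else:
                state[path] = {"present": True, "mode": task.get("mode")}
    return state

def verify(tasks, intent):
    """Check that the generated tasks satisfy the user's intent predicate."""
    return bool(intent(interpret(tasks)))

# User intent: "/etc/app.conf exists with mode 0644"
intent = lambda s: (s.get("/etc/app.conf", {}).get("present")
                    and s["/etc/app.conf"]["mode"] == "0644")

generated = [{"module": "file", "path": "/etc/app.conf",
              "state": "touch", "mode": "0644"}]
buggy = [{"module": "file", "path": "/etc/app.conf", "state": "absent"}]

print(verify(generated, intent))  # True: code matches intent
print(verify(buggy, intent))      # False: error detected
```

The real system works over Ansible's semantics rather than a toy state dictionary, but the shape is the same: interpret the generated program symbolically, then check the resulting behavior against the user-confirmed query.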
Problem

Research questions and friction points this paper is trying to address.

Ensuring correctness of LLM-generated code from natural language prompts
Providing formal guarantees for user intent matching in generated code
Verifying code accuracy in AI-assisted programming for non-experts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formal query language for user intent representation
Verification of LLM code against user intent
Symbolic interpreter for formal code verification
Aaron Councilman
PhD Candidate, University of Illinois at Urbana-Champaign
Programming Languages
David Fu
University of Illinois at Urbana-Champaign, USA
Aryan Gupta
University of Illinois at Urbana-Champaign, USA
Chengxiao Wang
University of Illinois at Urbana-Champaign, USA
David Grove
IBM Research
Programming Languages
Yu-Xiong Wang
University of Illinois at Urbana-Champaign, USA
Vikram Adve
University of Illinois at Urbana-Champaign
Compilers, Programming Languages, Parallel Computing, Computer Security