Towards LLM-based Generation of Human-Readable Proofs in Polynomial Formal Verification

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor readability and verifiability of proofs in polynomial formal verification (PFV), this paper introduces large language models (LLMs) to PFV for the first time, proposing a human-in-the-loop interactive proof generation framework. The framework integrates prompt engineering with formal reasoning engines and designs a verifiable interaction protocol that ensures mathematical rigor while producing structured, logically coherent, and human-readable proofs. Experimental evaluation on representative PFV benchmarks demonstrates a significant improvement in automated proof verification success rates, with manageable computational overhead. The core contribution is the establishment of a trustworthy LLM–formal verification co-processing paradigm that jointly ensures interpretability, formal verifiability, and practical utility, thereby opening a novel pathway for formal verification in circuit and system design.

📝 Abstract
Verification is one of the central tasks in circuit and system design. While simulation and emulation are widely used, complete correctness can only be ensured by formal proof techniques, which often have very high runtime and memory requirements. Recently, Polynomial Formal Verification (PFV) has been introduced, showing that for many instances of practical relevance, upper bounds on the required resources can be given. However, the proofs provided have to be human-readable. Here, we study how modern approaches from Artificial Intelligence (AI) based on Large Language Models (LLMs) can be used to generate proofs that can later be validated by reasoning engines. Examples show how LLMs can interact with proof engines, and directions for future work are outlined.
Problem

Research questions and friction points this paper is trying to address.

Generating human-readable proofs in formal verification
Reducing the high runtime and memory demands of formal verification
Integrating LLMs with proof engines for validation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based human-readable proof generation
Integration with polynomial formal verification
AI-driven interaction with proof engines
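
The interaction between an LLM and a proof engine described above can be sketched as a propose-and-check loop: the model proposes a human-readable proof step, a formal reasoning engine validates it, and rejections are fed back so the next proposal can be repaired. This is a minimal illustrative sketch, not the paper's actual protocol; the functions `propose_proof_step` and `check_step` are hypothetical placeholders standing in for a real LLM call and a real proof checker.

```python
# Hedged sketch of a human-in-the-loop LLM/proof-engine interaction loop.
# All names here are hypothetical placeholders, not the paper's API.

def propose_proof_step(goal, history):
    """Placeholder for an LLM call that proposes the next proof step."""
    # A real system would prompt an LLM with the goal and the failure
    # history; here we return a canned, deterministic step.
    return f"apply induction on n for goal: {goal}"

def check_step(step):
    """Placeholder for a reasoning engine that validates a proposed step."""
    # A real system would invoke a formal proof checker here.
    return "induction" in step

def generate_proof(goal, max_rounds=5):
    """Iterate: the LLM proposes a step, the engine checks it, and
    rejected steps are recorded so later proposals can avoid them."""
    history = []
    for _ in range(max_rounds):
        step = propose_proof_step(goal, history)
        if check_step(step):
            history.append(step)
            return history  # verified, human-readable proof steps
        history.append(f"REJECTED: {step}")
    return None  # no verified proof within the round budget

proof = generate_proof("sum of first n odd numbers equals n^2")
print(proof)
```

The key design point mirrored here is that the engine, not the LLM, is the arbiter of correctness: a step only enters the proof once it has been formally checked, which is what keeps the generated proof trustworthy despite the LLM being unverified.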