SpecLoop: An Agentic RTL-to-Specification Framework with Formal Verification Feedback Loop

📅 2026-03-03
🤖 AI Summary
This work addresses the high cost and error-proneness of understanding, maintaining, and verifying RTL designs, which often stem from inconsistent or outdated specification documents. To tackle this challenge, the paper proposes an agent-based framework that integrates large language models (LLMs) with formal equivalence checking. Through a closed-loop, iterative feedback mechanism, the framework automatically generates and refines specification documents directly from RTL code until functional consistency is established. By combining formal verification, LLM-based reasoning, an agent architecture, and reverse synthesis, the approach significantly outperforms purely LLM-driven methods across multiple benchmarks, improving the correctness, robustness, and consistency of the generated specifications.

📝 Abstract
RTL implementations frequently lack up-to-date or consistent specifications, making comprehension, maintenance, and verification costly and error-prone. While prior work has explored generating specifications from RTL using large language models (LLMs), ensuring that the generated documents faithfully capture design intent remains a major challenge. We present SpecLoop, an agentic framework for RTL-to-specification generation with a formal-verification-driven iterative feedback loop. SpecLoop first generates candidate specifications and then reconstructs RTL from these specifications; it uses formal equivalence checking tools between the reconstructed RTL and the original design to validate functional consistency. When mismatches are detected, counterexamples are fed back to iteratively refine the specifications until equivalence is proven or no further progress can be made. Experiments across multiple LLMs and RTL benchmarks show that incorporating formal verification feedback substantially improves specification correctness and robustness over LLM-only baselines, demonstrating the effectiveness of verification-guided specification generation.
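The verification-guided loop described in the abstract (generate a candidate spec, reconstruct RTL from it, run formal equivalence checking against the original design, and refine on counterexamples until equivalence is proven or progress stalls) can be sketched as below. This is a minimal illustration, not the paper's actual implementation: the LLM calls and the formal checker are passed in as placeholder callables, and all names (`spec_loop`, `EquivResult`, the `refine_spec` signature) are assumptions for the sketch.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class EquivResult:
    """Outcome of a formal equivalence check between two RTL designs."""
    equivalent: bool
    counterexample: Optional[str] = None  # trace fed back to the refiner

def spec_loop(
    original_rtl: str,
    generate_spec: Callable[[str], str],        # LLM: RTL -> candidate spec
    reconstruct_rtl: Callable[[str], str],      # LLM: spec -> reconstructed RTL
    check_equiv: Callable[[str, str], EquivResult],  # formal EC tool wrapper
    refine_spec: Callable[[str, str], str],     # LLM: (spec, counterexample) -> spec
    max_iters: int = 5,
) -> Tuple[str, bool]:
    """Closed-loop spec generation: refine until equivalence or budget exhausted.

    Returns the final spec and whether equivalence was proven.
    """
    spec = generate_spec(original_rtl)
    for _ in range(max_iters):
        candidate_rtl = reconstruct_rtl(spec)
        result = check_equiv(original_rtl, candidate_rtl)
        if result.equivalent:
            return spec, True  # reconstructed RTL matches the original design
        # Mismatch: feed the counterexample back to refine the spec
        spec = refine_spec(spec, result.counterexample)
    return spec, False  # no further progress within the iteration budget
```

In a real flow the `check_equiv` callable would wrap an equivalence-checking tool, and the three LLM callables would be prompt-driven agent steps; the loop structure itself is what the paper's feedback mechanism contributes over one-shot LLM generation.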
Problem

Research questions and friction points this paper is trying to address.

RTL
specification generation
formal verification
design intent
functional consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

formal verification
specification generation
RTL
feedback loop
large language models
👥 Authors
Fu-Chieh Chang
Unknown affiliation
Yu-Hsin Yang
Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan
Hung-Ming Huang
Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan
Yun-Chia Hsu
MediaTek Inc., Hsinchu, Taiwan
Yin-Yu Lin
MediaTek Inc., Hsinchu, Taiwan
Ming-Fang Tsai
MediaTek Inc., Hsinchu, Taiwan
Chun-Chih Yang
MediaTek Inc., Hsinchu, Taiwan
Pei-Yuan Wu
Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan