🤖 AI Summary
This work addresses the high cost and error-prone nature of understanding, maintaining, and verifying RTL designs, which often stems from inconsistent or outdated specification documents. To tackle this challenge, the paper proposes an agent-based framework that integrates large language models (LLMs) with formal equivalence checking. Through a closed-loop, iterative feedback mechanism, the framework automatically generates and continuously refines specification documents directly from RTL code, ensuring functional consistency. By combining formal verification, LLM-based reasoning, an agent architecture, and reverse synthesis, the approach significantly outperforms purely LLM-driven methods across multiple benchmarks, markedly improving the correctness, robustness, and consistency of the generated specifications.
📝 Abstract
RTL implementations frequently lack up-to-date or consistent specifications, making comprehension, maintenance, and verification costly and error-prone. While prior work has explored generating specifications from RTL using large language models (LLMs), ensuring that the generated documents faithfully capture design intent remains a major challenge. We present SpecLoop, an agentic framework for RTL-to-specification generation with a formal-verification-driven iterative feedback loop. SpecLoop first generates candidate specifications and then reconstructs RTL from these specifications; it then applies formal equivalence checking between the reconstructed RTL and the original design to validate functional consistency. When mismatches are detected, counterexamples are fed back to iteratively refine the specifications until equivalence is proven or no further progress can be made. Experiments across multiple LLMs and RTL benchmarks show that incorporating formal verification feedback substantially improves specification correctness and robustness over LLM-only baselines, demonstrating the effectiveness of verification-guided specification generation.
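The generate-reconstruct-verify loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual implementation: `generate_spec`, `reconstruct_rtl`, and `check_equivalence` are hypothetical stand-ins for the LLM calls and the formal equivalence checker, stubbed here so the control flow runs end to end.

```python
# Illustrative sketch of a SpecLoop-style refinement loop.
# All function names and their stub bodies are assumptions for demonstration;
# in the real framework they would wrap LLM calls and an equivalence checker.

def generate_spec(rtl, feedback=None):
    """LLM step (stub): draft a specification from RTL, folding in any
    counterexample feedback from a previous iteration."""
    spec = f"spec-of({rtl})"
    if feedback:
        spec += f"+fix({feedback})"
    return spec

def reconstruct_rtl(spec):
    """LLM step (stub): synthesize candidate RTL back from the spec."""
    return f"rtl-from({spec})"

def check_equivalence(original, reconstructed):
    """Formal equivalence check (stub): returns (equivalent, counterexample).
    Here we declare equivalence once one round of feedback has been absorbed."""
    if "fix" in reconstructed:
        return True, None
    return False, "output mismatch at cycle 3"

def specloop(rtl, max_iters=5):
    """Iterate spec generation until equivalence is proven or the
    iteration budget is exhausted."""
    feedback = None
    for i in range(max_iters):
        spec = generate_spec(rtl, feedback)
        candidate = reconstruct_rtl(spec)
        equivalent, cex = check_equivalence(rtl, candidate)
        if equivalent:
            return spec, i + 1   # equivalence proven
        feedback = cex           # feed the counterexample back
    return None, max_iters       # no further progress

spec, iters = specloop("adder.v")
```

With these stubs the loop converges on the second iteration: the first pass fails equivalence, the counterexample is fed back, and the refined spec passes.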