🤖 AI Summary
Existing large language model (LLM)-driven approaches to RTL generation are prone to structural drift when specifications evolve, often necessitating full regeneration and thereby compromising both consistency and efficiency. To address this limitation, this work proposes an incremental RTL generation framework that introduces, for the first time, a requirement-to-code traceability mechanism. By precisely identifying affected code segments, the framework enables localized regeneration coupled with consistency validation, achieving efficient and coherent iterative updates. Experimental evaluation on our newly constructed EvoRTL-Bench benchmark demonstrates that the proposed method significantly improves both regeneration consistency and computational efficiency, marking a critical step toward practical engineering adoption of LLM-driven RTL synthesis.
📝 Abstract
Large language models (LLMs) have shown promise in generating RTL code from natural-language descriptions, but existing methods remain static and struggle to adapt to evolving design requirements, potentially causing structural drift and costly full regeneration. We propose IncreRTL, an LLM-driven framework for incremental RTL generation under requirement evolution. By constructing requirement-to-code traceability links to locate and regenerate affected code segments, IncreRTL achieves accurate and consistent updates. Evaluated on our newly constructed EvoRTL-Bench, IncreRTL demonstrates notable improvements in regeneration consistency and efficiency, advancing LLM-based RTL generation toward practical engineering deployment.
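The traceability-driven localized regeneration described above can be sketched in a few lines. This is a minimal illustrative mockup, not the paper's implementation: the names (`Segment`, `trace_links`, `regenerate_segment`, the requirement IDs) are assumptions, and a real system would replace the stub regeneration function with an LLM call plus consistency validation.

```python
# Minimal sketch of requirement-to-code traceability for incremental RTL
# regeneration. All names and data here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    code: str

# Traceability links: requirement ID -> names of RTL segments derived from it.
trace_links = {
    "REQ-1": ["counter"],
    "REQ-2": ["comparator"],
}

# Current design state, keyed by segment name.
design = {
    "counter": Segment("counter", "always @(posedge clk) cnt <= cnt + 1;"),
    "comparator": Segment("comparator", "assign hit = (cnt == LIMIT);"),
}

def regenerate_segment(req_text: str, seg: Segment) -> Segment:
    # Stand-in for an LLM call that rewrites one segment from its
    # updated requirement; here it only annotates the existing code.
    return Segment(seg.name, f"// regenerated for: {req_text}\n" + seg.code)

def incremental_update(changed_reqs: dict) -> set:
    """Regenerate only the segments traced to changed requirements."""
    touched = set()
    for req_id, req_text in changed_reqs.items():
        for seg_name in trace_links.get(req_id, []):
            design[seg_name] = regenerate_segment(req_text, design[seg_name])
            touched.add(seg_name)
    return touched

touched = incremental_update({"REQ-2": "raise flag when cnt reaches LIMIT"})
# Only the comparator is regenerated; the counter segment is left untouched,
# which is the source of the consistency and efficiency gains.
```

The key design point is that the traceability map makes the edit scope explicit: unchanged requirements never trigger regeneration, so unaffected RTL stays byte-identical across iterations.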