🤖 AI Summary
Existing LLM-based RTL code generation methods suffer from outdated IP knowledge, poor adaptability to state-of-the-art LLMs, and inefficient RAG integration. Method: This paper proposes DeepV, a fine-tuning-free, model-agnostic retrieval-augmented framework that dynamically injects high-quality Verilog IP knowledge at inference time, without modifying model parameters, via context-aware retrieval over a large, expert-curated Verilog dataset. The framework is compatible with mainstream commercial LLMs. Contribution/Results: Evaluated on VerilogEval, the method achieves a 16.8% absolute accuracy gain, outperforming fine-tuning and traditional RAG approaches. The implementation is open-sourced and deployed as an interactive Hugging Face Space, advancing the practical adoption of AI-driven hardware design automation.
📝 Abstract
As large language models (LLMs) continue to be integrated into modern technology, there has been an increasing push toward code generation applications, which naturally extends to hardware design automation. LLM-based solutions for register transfer level (RTL) code generation for intellectual property (IP) designs have grown, with fine-tuned LLMs, prompt engineering, and agentic approaches becoming popular in the literature. However, these techniques expose a gap: they fail to integrate novel IPs into the model's knowledge base, resulting in poorly generated code. Additionally, as general-purpose LLMs continue to improve, methods fine-tuned on older models will struggle to compete in producing accurate and efficient designs. Although some retrieval-augmented generation (RAG) techniques exist to mitigate the challenges of fine-tuning approaches, existing works tend to leverage low-quality codebases, incorporate computationally expensive fine-tuning into their frameworks, or do not use RAG directly in the RTL generation step. In this work, we introduce DeepV: a model-agnostic RAG framework that generates RTL designs by enhancing context with a large, high-quality dataset, without any RTL-specific training. Our framework benefits the latest commercial LLM, OpenAI's GPT-5, with a near 17% increase in performance on the VerilogEval benchmark. We host DeepV for use by the community in a Hugging Face (HF) Space: https://huggingface.co/spaces/FICS-LLM/DeepV.
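The core idea described above, retrieving relevant Verilog examples from a curated dataset and injecting them into the prompt at inference time with no parameter updates, can be sketched as a minimal retrieval-augmented prompt builder. This is an illustrative sketch only: the toy corpus, the bag-of-words cosine scorer, and the prompt template are assumptions for demonstration, not DeepV's actual retriever or dataset.

```python
# Minimal sketch of fine-tuning-free RAG for RTL generation: retrieve the
# most relevant reference designs for a spec, then prepend them as context.
# The corpus, scorer, and template below are illustrative assumptions.
from collections import Counter
import math


def embed(text):
    """Hypothetical bag-of-words 'embedding'; a real context-aware
    retriever would use a learned dense encoder instead."""
    return Counter(text.lower().split())


def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(spec, corpus, k=2):
    # Rank corpus entries by similarity of their descriptions to the spec.
    q = embed(spec)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d["description"])),
                    reverse=True)
    return ranked[:k]


def build_prompt(spec, corpus):
    # Inject retrieved designs into the prompt; no model weights change.
    examples = retrieve(spec, corpus)
    context = "\n\n".join(f"// {d['description']}\n{d['code']}"
                          for d in examples)
    return ("You are an RTL designer. Reference designs:\n"
            f"{context}\n\n"
            f"Task: write synthesizable Verilog for: {spec}\n")


# Tiny stand-in for a large, expert-curated Verilog dataset.
CORPUS = [
    {"description": "4-bit synchronous counter with reset",
     "code": "module counter(input clk, rst, output reg [3:0] q);\n"
             "  always @(posedge clk) q <= rst ? 4'd0 : q + 1;\n"
             "endmodule"},
    {"description": "2-to-1 multiplexer",
     "code": "module mux2(input a, b, sel, output y);\n"
             "  assign y = sel ? b : a;\n"
             "endmodule"},
]

prompt = build_prompt("up counter with synchronous reset", CORPUS)
```

The resulting `prompt` string would then be sent to any commercial LLM's chat endpoint, which is what makes this style of framework model-agnostic: upgrading to a newer model requires no retraining, only re-pointing the same retrieval pipeline.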