🤖 AI Summary
RTL synthesis and summarization face significant challenges due to the strict syntax of hardware description languages, the scarcity of supervised data, and weak alignment with natural language. This work proposes SYMDIREC, a neuro-symbolic divide-and-conquer framework that, for the first time, integrates symbolic planning into RTL tasks. By combining symbolic subgoal decomposition, retrieval-augmented generation, and large language model reasoning, SYMDIREC supports both Verilog and VHDL without fine-tuning the LLM. Experimental results show that the approach achieves approximately a 20% improvement in Pass@1 on RTL synthesis and boosts ROUGE-L scores by 15–20% on summarization, substantially outperforming existing baselines.
📝 Abstract
Register-Transfer Level (RTL) synthesis and summarization are central to hardware design automation but remain challenging for Large Language Models (LLMs) due to rigid HDL syntax, limited supervision, and weak alignment with natural language. Existing prompting and retrieval-augmented generation (RAG) methods have not incorporated symbolic planning, limiting their structural precision. We introduce SYMDIREC, a neuro-symbolic framework that decomposes RTL tasks into symbolic subgoals, retrieves relevant code via a fine-tuned retriever, and assembles verified outputs through LLM reasoning. Supporting both Verilog and VHDL without LLM fine-tuning, SYMDIREC achieves ~20% higher Pass@1 rates for synthesis and 15–20% ROUGE-L improvements for summarization over prompting and RAG baselines, demonstrating the benefits of symbolic guidance in RTL tasks.
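The three-stage pipeline described above (symbolic subgoal decomposition, retrieval, LLM-based assembly) can be sketched in miniature. This is a hypothetical illustration only: the function names, the toy rule-based "planner", and the keyword retriever are stand-ins invented here, not the authors' implementation, and the LLM reasoning step is replaced by simple snippet concatenation.

```python
# Hypothetical sketch of a SYMDIREC-style divide-and-conquer pipeline.
# All names and logic are illustrative stand-ins, not the paper's system.

def decompose(spec):
    """Symbolic planning step: split an RTL spec into ordered subgoals.
    A toy split on semicolons stands in for a real symbolic planner."""
    return [g.strip() for g in spec.split(";") if g.strip()]

def retrieve(subgoal, corpus):
    """Retrieval step: return corpus snippets whose key appears in the
    subgoal. A real system would use a fine-tuned dense retriever."""
    words = set(subgoal.lower().split())
    return [code for key, code in corpus.items() if key in words]

def assemble(subgoals, snippets_per_goal):
    """Assembly step: stitch retrieved snippets together, one block per
    subgoal. A real system would use LLM reasoning and verification."""
    lines = []
    for goal, hits in zip(subgoals, snippets_per_goal):
        lines.append(f"// subgoal: {goal}")
        lines.extend(hits)
    return "\n".join(lines)

# Toy snippet corpus keyed by keyword (illustrative Verilog fragments).
corpus = {
    "counter": "always @(posedge clk) count <= count + 1;",
    "reset": "if (rst) count <= 0;",
}

spec = "add a counter; handle reset"
goals = decompose(spec)
rtl = assemble(goals, [retrieve(g, corpus) for g in goals])
print(rtl)
```

The divide-and-conquer shape is the point: each subgoal is grounded in retrieved code before assembly, rather than asking a model to emit a full HDL module in one shot.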