🤖 AI Summary
Hardware behavior-driven development (BDD) struggles to scale in complex systems because it relies on manually extracting behavioral scenarios from natural-language specifications. To address this, we propose the first systematic integration of large language models (LLMs) into the hardware BDD workflow: an end-to-end framework comprising *specification understanding*, *scenario generation*, and *formal mapping* that automates the translation of textual specifications into executable UVM tests. Using fine-tuned Llama and Qwen models, combined with prompt engineering and formal specification transformation techniques, our approach achieves 87% scenario generation accuracy across multiple RISC-V submodules, reduces manual test development time by 65% on average, and enables bidirectional, traceable verification between specifications and tests. By bridging the methodological gap between software BDD and hardware verification, this work establishes a foundation for scalable, specification-centric hardware validation.
📝 Abstract
Test and verification are essential activities in hardware and system design, but their complexity grows significantly with increasing system size. While Behavior-Driven Development (BDD) has proven effective in software engineering, it is not yet well established in hardware design, and its practical use remains limited. One contributing factor is the manual effort required to derive precise behavioral scenarios from textual specifications.
Recent advances in Large Language Models (LLMs) offer new opportunities to automate this step. In this paper, we investigate the use of LLM-based techniques to support BDD in the context of hardware design.