GRPO with State Mutations: Improving LLM-Based Hardware Test Plan Generation

πŸ“… 2026-01-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge that large language models (LLMs) struggle to generate effective, targeted test plans for hardware RTL verification, resulting in low success rates with existing approaches. To overcome this limitation, the authors propose a two-stage framework that decouples test plan generation from testbench execution and combines supervised fine-tuning with a novel reinforcement learning algorithm, GRPO-SMu, significantly improving the LLM's ability to produce valid hardware stimuli. A key innovation is a tree-structured branching mutation strategy for constructing training data, which moves beyond traditional linear mutation and improves the model's reasoning over hardware specifications. Experimental results show that a 7B-parameter model achieves a 33.3% pass rate on golden tests and a 13.9% mutation detection rate, a 17.6% absolute improvement over the baseline, outperforming much larger general-purpose models.
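The branching-versus-linear contrast in the summary can be sketched concretely. The snippet below is an illustrative toy, not the paper's implementation: the stimulus representation (a bit vector), the mutation operator (a single bit flip), and the branching parameters are all assumptions chosen to show why a tree of mutants grows a far richer variant set than a single linear mutation chain of the same depth.

```python
# Toy sketch of tree-structured branching mutation: instead of a linear
# chain (seed -> m1 -> m2 -> ...), every node spawns several mutated
# children, producing a tree of stimulus variants. All names here
# (Node, mutate, grow_tree) are illustrative, not the paper's API.
from dataclasses import dataclass, field
import random

@dataclass
class Node:
    stimulus: list          # toy stand-in for a hardware stimulus vector
    children: list = field(default_factory=list)

def mutate(stimulus, rng):
    """Flip one randomly chosen bit (toy mutation operator)."""
    s = stimulus.copy()
    s[rng.randrange(len(s))] ^= 1
    return s

def grow_tree(seed, depth, branching, rng):
    """Branching mutation: each frontier node spawns `branching` children."""
    root = Node(seed)
    frontier = [root]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for _ in range(branching):
                child = Node(mutate(node.stimulus, rng))
                node.children.append(child)
                next_frontier.append(child)
        frontier = next_frontier
    return root

def collect(node):
    """Yield every stimulus variant in the tree, root first."""
    yield node.stimulus
    for c in node.children:
        yield from collect(c)

rng = random.Random(0)
tree = grow_tree([0] * 8, depth=2, branching=3, rng=rng)
variants = list(collect(tree))
# depth 2 with branching 3 yields 1 + 3 + 9 = 13 variants, versus only
# 3 (seed plus two mutants) from a linear chain of the same depth
```

A linear chain explores one trajectory of mutations; the tree keeps every intermediate as a branch point, which is the "rich learning signal" of equivalent and mutated variants the summary refers to.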

πŸ“ Abstract
RTL design often relies heavily on ad-hoc testbench creation early in the design cycle. While large language models (LLMs) show promise for RTL code generation, their ability to reason about hardware specifications and generate targeted test plans remains largely unexplored. We present the first systematic study of LLM reasoning capabilities for RTL verification stimuli generation, establishing a two-stage framework that decouples test plan generation from testbench execution. Our benchmark reveals that state-of-the-art models, including DeepSeek-R1 and Claude-4.0-Sonnet, achieve only 15.7-21.7% success rates at generating stimuli that pass golden RTL designs. To improve LLM-generated stimuli, we develop a comprehensive training methodology combining supervised fine-tuning with a novel reinforcement learning approach, GRPO with State Mutation (GRPO-SMu), which enhances exploration by varying input mutations. Our approach leverages a tree-based branching mutation strategy to construct training data comprising equivalent and mutated trees, moving beyond linear mutation approaches to provide rich learning signals. Training on this curated dataset, our 7B-parameter model achieves a 33.3% golden test pass rate and a 13.9% mutation detection rate, representing a 17.6% absolute improvement over baseline and outperforming much larger general-purpose models. These results demonstrate that specialized training methodologies can significantly enhance LLM reasoning capabilities for hardware verification tasks, establishing a foundation for automated sub-unit testing in semiconductor design workflows.
Problem

Research questions and friction points this paper is trying to address.

LLM
RTL verification
test plan generation
hardware testing
stimuli generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

GRPO-SMu
state mutation
RTL verification
test plan generation
tree-based mutation
πŸ”Ž Similar Papers
No similar papers found.
Dimple Vijay Kochar
Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA
Nathaniel Pinckney
NVIDIA
ASIC, VLSI, HLS
Guan-Ting Liu
NVIDIA Research, Taiwan
Chia-Tung Ho
NVIDIA Research, Santa Clara, CA
Chenhui Deng
NVIDIA Corporation
Graph Learning, LLM, Chip Design
Haoxing Ren
NVIDIA Research, Austin, TX
Brucek Khailany
Senior Director of VLSI Research, NVIDIA