🤖 AI Summary
Large language models (LLMs) often generate Verilog finite-state machine (FSM) code containing security vulnerabilities, posing risks in safety-critical SoC designs. Method: This paper proposes a knowledge graph–guided LLM generation framework. Its core innovation is the construction of FSKG—the first domain-specific knowledge graph for FSM security—enabling systematic vulnerability pattern recognition and requirement-to-risk mapping to generate structured, security-aware prompts. The method integrates knowledge graph retrieval, vulnerability-driven prompt engineering, and collaborative LLM optimization to close the loop from natural-language specifications to secure hardware code. Contribution/Results: Evaluated on 25 security-critical test cases, the approach achieves a 21/25 pass rate, significantly outperforming existing state-of-the-art baselines. It establishes a verifiable, interpretable paradigm for automated, security-assured FSM synthesis in SoC design.
📝 Abstract
Finite State Machines (FSMs) play a critical role in implementing control logic for Systems-on-Chip (SoC). Traditionally, FSMs are implemented by hardware engineers through Verilog coding, which is often tedious and time-consuming. Recently, with the remarkable progress of Large Language Models (LLMs) in code generation, LLMs have been increasingly explored for automating Verilog code generation. However, LLM-generated Verilog code often suffers from security vulnerabilities, which is particularly concerning for security-sensitive FSM implementations. To address this issue, we propose SecFSM, a novel method that leverages a security-oriented knowledge graph to guide LLMs in generating more secure Verilog code. Specifically, we first construct an FSM Security Knowledge Graph (FSKG) as an external aid to LLMs. Subsequently, we analyze users' requirements to identify potential vulnerabilities, producing a vulnerability list. Then, we retrieve knowledge from FSKG based on the vulnerability list. Finally, we construct security prompts from the retrieved knowledge to guide Verilog code generation. To evaluate SecFSM, we build a dedicated dataset collected from academic datasets, artificial datasets, papers, and industrial cases. Extensive experiments demonstrate that SecFSM outperforms state-of-the-art baselines. In particular, on a benchmark of 25 security test cases evaluated by DeepSeek-R1, SecFSM achieves an outstanding pass rate of 21/25.
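The pipeline the abstract describes (identify vulnerabilities in the requirement, retrieve mitigation knowledge from FSKG, assemble a security-aware prompt) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the miniature FSKG, its keyword-matching retrieval, and all names (`FSKG`, `identify_vulnerabilities`, `build_security_prompt`) are hypothetical stand-ins for the paper's graph-based components.

```python
# Illustrative sketch only -- a toy FSM Security Knowledge Graph as a dict
# mapping vulnerability patterns to trigger keywords and mitigation hints.
# The real FSKG is a graph; this flat structure just shows the flow.
FSKG = {
    "undefined_default_state": {
        "keywords": ["state machine", "fsm"],
        "mitigation": "Add a default branch that resets to a safe state.",
    },
    "unreachable_lock_state": {
        "keywords": ["lock", "unlock"],
        "mitigation": "Ensure the locked state is entered only via the "
                      "authorized transition and cannot be bypassed.",
    },
}

def identify_vulnerabilities(requirement: str) -> list[str]:
    """Step 1: scan the natural-language requirement for risk patterns."""
    req = requirement.lower()
    return [vuln for vuln, info in FSKG.items()
            if any(kw in req for kw in info["keywords"])]

def build_security_prompt(requirement: str) -> str:
    """Steps 2-3: retrieve mitigations from the toy FSKG and assemble
    a structured, security-aware prompt for the LLM."""
    vulns = identify_vulnerabilities(requirement)
    lines = [f"Generate Verilog FSM code for: {requirement}",
             "Apply the following security constraints:"]
    lines += [f"- [{v}] {FSKG[v]['mitigation']}" for v in vulns]
    return "\n".join(lines)

prompt = build_security_prompt(
    "An FSM controlling a door lock with unlock via keypad")
print(prompt)
```

In the actual method, this prompt would then be handed to the LLM, with the generated Verilog further refined through the collaborative optimization loop the summary mentions.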