HardSecBench: Benchmarking the Security Awareness of LLMs for Hardware Code Generation

📅 2026-01-20
📈 Citations: 0
Influential citations: 0
📄 PDF
🤖 AI Summary
This work addresses a critical oversight in current large language model (LLM) code generation for hardware and firmware: existing evaluations focus predominantly on functional correctness while neglecting security vulnerabilities. To bridge this gap, we present HardSecBench, the first systematic benchmark of its kind, comprising 924 tasks spanning 76 hardware-related Common Weakness Enumeration (CWE) entries. We introduce a multi-agent automated evaluation framework that decouples code synthesis from verification and grounds security assessment in executable evidence through structured specifications, secure reference implementations, and executable tests. Empirical evaluation reveals that while mainstream LLMs often satisfy functional requirements, they consistently exhibit security flaws. Furthermore, our analysis demonstrates that prompting strategies significantly influence security outcomes, offering crucial insights for leveraging LLMs in safety-critical hardware design.

📝 Abstract
Large language models (LLMs) are increasingly being integrated into practical hardware and firmware development pipelines for code generation. Existing studies have primarily focused on evaluating the functional correctness of LLM-generated code, yet have paid limited attention to its security. However, LLM-generated code that appears functionally sound may embed security flaws that could cause catastrophic damage after deployment. This critical research gap motivates us to design a benchmark for assessing security awareness under realistic specifications. In this work, we introduce HardSecBench, a benchmark with 924 tasks spanning Verilog Register Transfer Level (RTL) and firmware-level C, covering 76 hardware-relevant Common Weakness Enumeration (CWE) entries. Each task includes a structured specification, a secure reference implementation, and executable tests. To automate artifact synthesis, we propose a multi-agent pipeline that decouples synthesis from verification and grounds evaluation in execution evidence, enabling reliable assessment. Using HardSecBench, we evaluate a range of LLMs on hardware and firmware code generation and find that models often satisfy functional requirements while still introducing security risks. We also find that security outcomes vary with the prompting strategy. These findings highlight pressing challenges and offer actionable insights for future advancements in LLM-assisted hardware design. Our data and code will be released soon.
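The abstract describes tasks built from a structured specification, a secure reference implementation, and executable tests, evaluated by a multi-agent pipeline that separates synthesis from verification. The sketch below is a hypothetical illustration of that shape, not the paper's actual data format or API: every name in it (TaskSpec, synthesize, verify, the CWE-125 toy task) is an assumption, and the string-match checks merely stand in for the compiled or simulated tests the benchmark actually executes.

```python
# Hypothetical sketch of a HardSecBench-style task record and a decoupled
# synthesis/verification loop. All identifiers are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class TaskSpec:
    """One benchmark task: structured spec, secure reference, and tests."""
    cwe_id: str                  # e.g. "CWE-125" (illustrative weakness label)
    language: str                # "verilog" or "c"
    specification: str           # natural-language design requirements
    secure_reference: str        # known-good implementation used as an oracle
    functional_tests: List[Callable[[str], bool]] = field(default_factory=list)
    security_tests: List[Callable[[str], bool]] = field(default_factory=list)


def synthesize(spec: TaskSpec, generate_code: Callable[[str], str]) -> str:
    """Synthesis agent: ask a model for code given only the specification."""
    return generate_code(spec.specification)


def verify(spec: TaskSpec, candidate: str) -> Dict[str, object]:
    """Verification agent: score the candidate against the task's tests only."""
    functional_ok = all(t(candidate) for t in spec.functional_tests)
    security_ok = all(t(candidate) for t in spec.security_tests)
    return {"cwe": spec.cwe_id, "functional": functional_ok, "secure": security_ok}


if __name__ == "__main__":
    # Toy firmware-level C task: reads must be bounds-checked.
    task = TaskSpec(
        cwe_id="CWE-125",
        language="c",
        specification="Return reg[i] when 0 <= i < 16; otherwise return 0.",
        secure_reference=(
            "int read_reg(const int *reg, int i)"
            "{ return (i >= 0 && i < 16) ? reg[i] : 0; }"
        ),
        # String checks as placeholders for real compile-and-run tests.
        functional_tests=[lambda code: "read_reg" in code],
        security_tests=[lambda code: ">= 0" in code and "< 16" in code],
    )
    insecure_llm = lambda prompt: (
        "int read_reg(const int *reg, int i){ return reg[i]; }"
    )
    print(verify(task, synthesize(task, insecure_llm)))
    # -> {'cwe': 'CWE-125', 'functional': True, 'secure': False}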
Problem

Research questions and friction points this paper is trying to address.

LLM security
hardware code generation
security benchmark
CWE
Verilog RTL
Innovation

Methods, ideas, or system contributions that make the work stand out.

HardSecBench
security-aware code generation
hardware LLM benchmarking
multi-agent synthesis pipeline
CWE-aware evaluation
Qirui Chen
Shanghai Jiao Tong University
Jingxian Shuai
University of Science and Technology of China, China
Shuangwu Chen
University of Science and Technology of China, China
Shenghao Ye
University of Science and Technology of China, China
Zijian Wen
University of Science and Technology of China, China
Xufei Su
Xi’an Jiaotong University, China
Jie Jin
Nanjing University, China
Jiangming Li
ZTE Corporation, China
Jun Chen
ZTE Corporation, China
Xiaobin Tan
University of Science and Technology of China, China
Jian Yang
University of Science and Technology of China, China