🤖 AI Summary
This work addresses a critical oversight in current evaluations of large language models (LLMs) for hardware and firmware code generation, which predominantly focus on functional correctness while neglecting security vulnerabilities. To bridge this gap, we present HardSecBench—the first systematic benchmark of this kind, comprising 924 tasks spanning 76 hardware-related Common Weakness Enumeration (CWE) entries. We introduce a multi-agent automated evaluation framework that decouples code synthesis from verification, grounding security assessment in executable evidence through structured specifications, secure reference implementations, and executable tests. Empirical evaluation reveals that while mainstream LLMs often satisfy functional requirements, they consistently exhibit security flaws. Furthermore, our analysis demonstrates that prompting strategies significantly influence security outcomes, offering crucial insights for leveraging LLMs in safety-critical hardware design.
📝 Abstract
Large language models (LLMs) are increasingly integrated into practical hardware and firmware development pipelines for code generation. Existing studies have primarily evaluated the functional correctness of LLM-generated code while paying limited attention to its security. However, LLM-generated code that appears functionally sound may embed security flaws that can cause severe damage after deployment. This research gap motivates us to design a benchmark for assessing security awareness under realistic specifications. In this work, we introduce HardSecBench, a benchmark of 924 tasks spanning Verilog Register Transfer Level (RTL) and firmware-level C, covering 76 hardware-relevant Common Weakness Enumeration (CWE) entries. Each task includes a structured specification, a secure reference implementation, and executable tests. To automate artifact synthesis, we propose a multi-agent pipeline that decouples synthesis from verification and grounds evaluation in execution evidence, enabling reliable assessment. Using HardSecBench, we evaluate a range of LLMs on hardware and firmware code generation and find that models often satisfy functional requirements while still introducing security risks. We also find that security outcomes vary with the prompting strategy. These findings highlight pressing challenges and offer actionable insights for future advancements in LLM-assisted hardware design. Our data and code will be released soon.