SALAD: Systematic Assessment of Machine Unlearning on LLM-Aided Hardware Design

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper proposes the first systematic machine unlearning evaluation framework tailored to hardware-design large language models (LLMs), targeting three data security risks in LLM-assisted hardware design: Verilog benchmark contamination, intellectual property (IP) leakage, and malicious code generation. Methodologically, it pioneers the integration of machine unlearning into hardware security governance, combining gradient correction, influence function estimation, and trigger-based verification with a Verilog semantic-aware evaluation protocol. This enables precise removal of sensitive IP, contaminated benchmarks, and malicious code patterns without full retraining. Evaluated across multiple hardware-design LLMs, the framework achieves over a 90% forgetting rate on contaminated data, reduces IP leakage risk by 87%, and preserves over 95% of the original Verilog generation functionality, demonstrating both efficacy and utility retention.
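The "gradient correction" idea mentioned in the summary can be illustrated with a minimal sketch: take gradient ascent steps on the loss over the data to be forgotten while taking descent steps on the data to be retained. A toy linear regressor stands in for an LLM here; all names (`unlearn_step`, `w_retain`, `w_forget`) and hyperparameters are illustrative assumptions, not details from the paper.

```python
# Toy sketch of gradient-ascent unlearning: ascend the forget-set loss,
# descend the retain-set loss. A linear model stands in for an LLM.
import numpy as np

def mse(w, X, y):
    r = X @ w - y
    return float(r @ r) / len(y)

def grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)

def unlearn_step(w, Xf, yf, Xr, yr, lr=0.005):
    """One update: ascend loss on forget data, descend loss on retain data."""
    return w + lr * grad(w, Xf, yf) - lr * grad(w, Xr, yr)

rng = np.random.default_rng(0)
w_retain = np.array([1.0, -2.0])   # "legitimate" input-output relation
w_forget = np.array([3.0, 0.0])    # "sensitive" relation to be removed
Xr = rng.normal(size=(160, 2)); yr = Xr @ w_retain
Xf = rng.normal(size=(40, 2));  yf = Xf @ w_forget

# "Pre-trained" model: least-squares fit on the union of retain + forget data.
X = np.vstack([Xr, Xf]); y = np.concatenate([yr, yf])
w = np.linalg.lstsq(X, y, rcond=None)[0]

before_f, before_r = mse(w, Xf, yf), mse(w, Xr, yr)
for _ in range(20):
    w = unlearn_step(w, Xf, yf, Xr, yr)
after_f, after_r = mse(w, Xf, yf), mse(w, Xr, yr)
# Forget-set loss rises while retain-set loss falls: forgetting with
# utility retention, the trade-off the paper's metrics quantify.
```

The two reported metrics map directly onto this picture: forgetting rate tracks how much the forget-set loss rises, and utility retention tracks how well retain-set performance is preserved.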

📝 Abstract
Large Language Models (LLMs) offer transformative capabilities for hardware design automation, particularly in Verilog code generation. However, they also pose significant data security challenges, including Verilog evaluation data contamination, intellectual property (IP) design leakage, and the risk of malicious Verilog generation. We introduce SALAD, a comprehensive assessment that leverages machine unlearning to mitigate these threats. Our approach enables the selective removal of contaminated benchmarks, sensitive IP and design artifacts, or malicious code patterns from pre-trained LLMs, all without requiring full retraining. Through detailed case studies, we demonstrate how machine unlearning techniques effectively reduce data security risks in LLM-aided hardware design.
Problem

Research questions and friction points this paper is trying to address.

Mitigate Verilog evaluation data contamination in LLMs
Prevent intellectual property leakage in hardware design
Remove malicious code patterns from pre-trained LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Machine unlearning for Verilog contamination removal
Selective deletion of sensitive IP without retraining
Mitigating malicious code risks in LLM-aided design
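The "selective deletion without retraining" bullet is commonly realized with influence functions: a first-order estimate of how the fitted parameters would change if a training example were removed, computed from the Hessian and that example's gradient rather than by retraining. The sketch below uses a toy ordinary-least-squares model as a stand-in for an LLM; the setup and variable names are illustrative assumptions.

```python
# Sketch of influence-function-based deletion: estimate the parameter change
# from removing one training example, w_{-i} ≈ w + H^{-1} ∇ℓ_i(w) / n,
# then compare against the exact retrain-from-scratch answer.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=100)

w = np.linalg.lstsq(X, y, rcond=None)[0]      # model trained on all data

i = 7                                         # index of example to "delete"
H = 2.0 * X.T @ X / len(y)                    # Hessian of the mean squared error
g = -2.0 * X[i] * (y[i] - X[i] @ w)           # gradient of example i's loss at w
w_est = w + np.linalg.solve(H, g) / len(y)    # influence-based parameter update

# Ground truth: actually retrain with example i removed.
X2, y2 = np.delete(X, i, axis=0), np.delete(y, i)
w_exact = np.linalg.lstsq(X2, y2, rcond=None)[0]
# w_est closely matches w_exact, with no retraining required.
```

For this quadratic loss the first-order estimate is nearly exact; for LLMs the Hessian must itself be approximated, which is where estimation techniques like those the paper combines come in.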