🤖 AI Summary
Enterprises face a trade-off between intellectual property (IP) leakage and fine-tuning utility when adapting large language models (LLMs) using proprietary Verilog IP. Method: We systematically investigate post-fine-tuning IP extraction risks using LLaMA 3.1-8B, employing AST- and Dolos-based structural similarity analysis alongside Synopsys Formality for functional equivalence verification; we empirically evaluate ASSURE logic locking as a defense. Results: Unprotected fine-tuned models exhibit significant Verilog IP leakage; logic locking effectively mitigates leakage but degrades Verilog generation quality and diminishes IP-specific training gains. This work provides the first quantitative measurement of Verilog IP extractability from fine-tuned LLMs and establishes an empirical benchmark characterizing the IP-protection–performance trade-off, thereby offering a methodological foundation for secure, hardware-aware LLM fine-tuning.
📝 Abstract
Large language models (LLMs) offer significant potential for coding, yet fine-tuning (FT) with curated data is essential for niche languages like Verilog. Using proprietary intellectual property (IP) for FT presents a serious risk, as FT data can be leaked through LLM inference. This leads to a critical dilemma for design houses: seeking to build externally accessible LLMs offering competitive Verilog coding, how can they leverage in-house IP to enhance FT utility while ensuring IP protection? For the first time in the literature, we study this dilemma. Using LLaMA 3.1-8B, we conduct in-house FT on a baseline Verilog dataset (RTLCoder) supplemented with our own in-house IP, which has been validated through multiple tape-outs. To rigorously assess IP leakage, we quantify structural similarity (AST/Dolos) and functional equivalence (Synopsys Formality) between generated code and our in-house IP. We show that our IP can indeed be leaked, confirming the threat. As a defense, we evaluate logic locking of Verilog code (ASSURE). This offers some level of protection, yet it reduces the IP's utility for FT and degrades the LLM's performance. Our study shows the need for novel strategies that are both effective and minimally disruptive to FT, an essential effort for enabling design houses to fully utilize their proprietary IP toward LLM-driven Verilog coding.
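To illustrate the kind of structural-similarity check the abstract refers to, a Dolos-style comparison can be approximated as follows: tokenize the Verilog source, normalize identifiers (so that renaming signals does not hide copying), fingerprint token k-grams, and compute fingerprint overlap. This is a minimal sketch under assumed simplifications, not the paper's actual AST/Dolos pipeline; all function names and the keyword list are illustrative.

```python
import hashlib
import re

# Small illustrative keyword set; real Verilog has many more reserved words.
VERILOG_KEYWORDS = {
    "module", "endmodule", "input", "output", "wire", "reg", "assign",
    "always", "begin", "end", "if", "else", "case", "endcase",
    "posedge", "negedge",
}

def tokenize_verilog(src: str) -> list[str]:
    # Strip block and line comments, then split into identifiers,
    # numbers, and single-character operators/punctuation.
    src = re.sub(r"/\*.*?\*/", " ", src, flags=re.DOTALL)
    src = re.sub(r"//.*", " ", src)
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", src)

def fingerprints(tokens: list[str], k: int = 5) -> set[int]:
    # Map non-keyword identifiers to a placeholder so simple renaming
    # does not change the fingerprint, then hash every k-gram.
    norm = [
        "ID" if re.fullmatch(r"[A-Za-z_]\w*", t) and t not in VERILOG_KEYWORDS
        else t
        for t in tokens
    ]
    grams = (" ".join(norm[i:i + k]) for i in range(len(norm) - k + 1))
    return {int(hashlib.sha1(g.encode()).hexdigest()[:8], 16) for g in grams}

def similarity(a: str, b: str, k: int = 5) -> float:
    # Jaccard overlap of the two fingerprint sets in [0, 1].
    fa = fingerprints(tokenize_verilog(a), k)
    fb = fingerprints(tokenize_verilog(b), k)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)
```

With this sketch, a module whose signals have merely been renamed scores near 1.0 against the original, while a structurally different module scores lower, which is the intuition behind flagging generated code that reproduces FT data.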