LLA: Enhancing Security and Privacy for Generative Models with Logic-Locked Accelerators

📅 2025-12-26
🤖 AI Summary
To address intellectual property (IP) security threats to generative AI models in the supply chain, including model theft, tampering, and information leakage, this paper proposes a software-hardware co-designed, neuron-level protection framework. Methodologically, it introduces the first neuron-granular key-embedding and anomaly-triggering mechanism, coupled with a lightweight, reusable hardware lock module that enables an "accelerator-as-license" authorization paradigm. To counter key-optimization attacks, the framework combines logic locking with invariance-preserving transformations and is evaluated under an oracle-guided attack model. With a 7,168-bit key, the framework incurs less than 0.1% computational overhead, causes negligible accuracy degradation, and remains fully compatible with mainstream hardware toolchains and diverse dataflow patterns.

📝 Abstract
We introduce LLA, an effective intellectual property (IP) protection scheme for generative AI models. LLA leverages the synergy between hardware and software to defend against various supply chain threats, including model theft, model corruption, and information leakage. On the software side, it embeds key bits into neurons that can trigger outliers to degrade performance and applies invariance transformations to obscure the key values. On the hardware side, it integrates a lightweight locking module into the AI accelerator while maintaining compatibility with various dataflow patterns and toolchains. An accelerator with a pre-stored secret key acts as a license to access the model services provided by the IP owner. The evaluation results show that LLA can withstand a broad range of oracle-guided key optimization attacks, while incurring a minimal computational overhead of less than 0.1% for 7,168 key bits.
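The abstract's claim that LLA "embeds key bits into neurons that can trigger outliers to degrade performance" can be sketched in miniature. This is a hypothetical illustration, not the paper's actual construction: here each key bit marks a neuron whose weights are stored in an obfuscated (scaled) form, so only an accelerator holding the correct key restores the model, while a wrong key leaves those neurons producing outlier activations. The function names and the scaling transform are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

SCALE = 1e3  # assumed obfuscation factor for keyed neurons (illustrative)

def lock_weights(w, key_bits):
    """Obfuscate the rows (neurons) selected by the key bits."""
    w_locked = w.copy()
    for i, bit in enumerate(key_bits):
        if bit:
            w_locked[i] *= SCALE
    return w_locked

def unlock_weights(w_locked, key_bits):
    """Accelerator-side unlock: the correct key bits invert the obfuscation."""
    w = w_locked.copy()
    for i, bit in enumerate(key_bits):
        if bit:
            w[i] /= SCALE
    return w

w = rng.standard_normal((8, 4))        # 8 neurons, 4 inputs each
key = [1, 0, 1, 1, 0, 0, 1, 0]         # one embedded key bit per neuron
locked = lock_weights(w, key)

x = rng.standard_normal(4)
good = unlock_weights(locked, key) @ x       # correct key: original activations
bad = unlock_weights(locked, [0] * 8) @ x    # wrong key: keyed neurons stay scaled
```

With the correct key the unlocked weights match the originals exactly; with a wrong key the keyed neurons' activations grow by roughly the obfuscation factor, which is the outlier-triggering degradation the abstract describes.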
Problem

Research questions and friction points this paper is trying to address.

How to protect generative AI models from theft and corruption
How to secure models against supply-chain threats and information leakage
How to integrate hardware-software locking to prevent unauthorized model access
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware-software synergy for AI model IP protection
Key bits embedded in neurons trigger outliers that degrade performance under an incorrect key
Lightweight locking module integrated into AI accelerator
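The "invariance transformations to obscure the key values" mentioned in the abstract can be illustrated with a standard positive-scaling invariance of ReLU networks (a minimal sketch under assumed two-layer weights; the paper's actual transformations may differ): scaling a hidden neuron's incoming weights by a positive factor and its outgoing weights by the reciprocal leaves the network's function unchanged while altering every stored weight value.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(W1, W2, x):
    """Two-layer ReLU network (illustrative architecture)."""
    return W2 @ relu(W1 @ x)

rng = np.random.default_rng(1)
W1 = rng.standard_normal((16, 8))   # 16 hidden neurons, 8 inputs
W2 = rng.standard_normal((4, 16))   # 4 outputs

# relu(a * z) = a * relu(z) for a > 0, so scaling W1's rows by alpha and
# W2's columns by 1/alpha preserves the function while changing every
# stored weight value, obscuring where key bits were embedded.
alpha = rng.uniform(0.5, 2.0, size=16)
W1_obf = W1 * alpha[:, None]
W2_obf = W2 / alpha[None, :]

x = rng.standard_normal(8)
original = forward(W1, W2, x)
obscured = forward(W1_obf, W2_obf, x)
```

Because the transformed network is functionally identical, an attacker inspecting the weights cannot distinguish the obfuscated values from the originals, yet the key-carrying values have all changed.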
Authors
You Li (Northwestern University, Evanston, IL, USA)
Guannan Zhao (Northwestern University, Evanston, IL, USA)
Yuhao Ju (Northwestern University, Evanston, IL, USA)
Yunqi He (Northwestern University, Evanston, IL, USA)
Jie Gu (Northwestern University, Evanston, IL, USA)
Hai Zhou (Northwestern University)
Tags: EDA, logic locking, hardware security, physical design, VLSI