REDACTOR: eFPGA Redaction for DNN Accelerator Security

📅 2025-01-30
🤖 AI Summary
Deep neural network (DNN) hardware accelerators face significant security risks from reverse engineering and IP theft. Method: This paper proposes an eFPGA-based redaction mechanism that selectively obfuscates critical compute modules pre-fabrication; authorized users dynamically restore full functionality at deployment via legitimate bitstream injection. Contribution/Results: We present the first end-to-end eFPGA redaction framework tailored for DNN accelerators—including architecture customization, sensitivity analysis, logic synthesis, place-and-route, and timing verification—and introduce fracturable LUTs to enable fine-grained, module-level redaction. Evaluation on representative DNN accelerators shows minimal overhead: <12% area, <8% delay, and <10% power increase. The approach substantially enhances IP resilience against reverse engineering and unauthorized reuse, establishing an efficient, controllable, hardware-enforced security paradigm for AI chips.
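The fracturable LUTs mentioned above can be illustrated with a small model. The sketch below is an assumption modeled on common commercial FPGA architectures (e.g., a Xilinx-style LUT6_2 with dual outputs), not the paper's exact eFPGA design: one 6-input LUT can either compute a single 6-input function or be fractured into two independent 5-input functions, enabling finer-grained, module-level redaction.

```python
# Illustrative model of a fracturable LUT (hypothetical; modeled on common
# FPGA architectures, not the paper's exact design). A 6-input LUT holds a
# 64-bit configuration: output o5 reads the lower 32 bits using inputs
# i4..i0, while o6 reads all 64 bits using i5..i0. Tying i5 high fractures
# the LUT into two independent 5-input functions.

def index(bits):
    """Pack a tuple of bits (MSB first) into a truth-table index."""
    idx = 0
    for b in bits:
        idx = (idx << 1) | b
    return idx

def lut6_2(init64, i5, i4, i3, i2, i1, i0):
    """Return (o5, o6) for the given 64-bit configuration and six inputs."""
    idx6 = index((i5, i4, i3, i2, i1, i0))
    idx5 = index((i4, i3, i2, i1, i0))
    return (init64 >> idx5) & 1, (init64 >> idx6) & 1

# Pack two redacted functions into one fracturable LUT:
# o5 = i0 AND i1, and o6 = i0 XOR i1 (with i5 tied to 1).
init = 0
for idx5 in range(32):
    i0, i1 = idx5 & 1, (idx5 >> 1) & 1
    init |= (i0 & i1) << idx5          # lower half: o5 implements AND
    init |= (i0 ^ i1) << (32 + idx5)   # upper half: o6 implements XOR

for a in (0, 1):
    for b in (0, 1):
        o5, o6 = lut6_2(init, 1, 0, 0, 0, b, a)
        assert o5 == (a & b) and o6 == (a ^ b)
```

Because one fracturable LUT absorbs two small functions, redacting fine-grained logic costs fewer eFPGA resources than using one full-size LUT per function.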

📝 Abstract
With the ever-increasing integration of artificial intelligence into daily life and the growing importance of well-trained models, the security of hardware accelerators supporting Deep Neural Networks (DNNs) has become paramount. eFPGA redaction has emerged as a promising solution to prevent hardware intellectual property theft. This technique selectively conceals critical components of the design, allowing authorized users to restore functionality post-fabrication by inserting the correct bitstream. In this paper, we explore the redaction of DNN accelerators using eFPGAs, from specification to physical design implementation. Specifically, we investigate the selection of critical DNN modules for redaction using both regular and fracturable look-up tables. We perform synthesis, timing verification, and place-and-route on redacted DNN accelerators. Furthermore, we evaluate the overhead of incorporating eFPGAs into DNN accelerators in terms of power, area, and delay, finding it reasonable given the security benefits.
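The core redaction idea in the abstract can be sketched in miniature. In the sketch below, the choice of redacted module (a gate in a MAC datapath), the LUT size, and the bitstream format are illustrative assumptions, not the paper's implementation:

```python
# Sketch of eFPGA redaction (illustrative assumptions throughout): a critical
# gate inside a DNN accelerator's MAC datapath is replaced pre-fabrication by
# a programmable LUT. Only the correct bitstream, loaded post-fabrication by
# an authorized user, restores the original function.

def lut_eval(bitstream, inputs):
    """Evaluate a k-input LUT: `bitstream` lists the 2**k truth-table bits,
    indexed by the input bits packed MSB-first."""
    idx = 0
    for bit in inputs:
        idx = (idx << 1) | bit
    return bitstream[idx]

# Redacted function: 2-input XOR (e.g., part of an adder inside a MAC unit).
correct = (0, 1, 1, 0)  # truth table for inputs 00, 01, 10, 11 -> XOR
wrong   = (0, 0, 0, 1)  # an attacker's guess (AND)

restored = all(lut_eval(correct, (a, b)) == (a ^ b)
               for a in (0, 1) for b in (0, 1))
broken = any(lut_eval(wrong, (a, b)) != (a ^ b)
             for a in (0, 1) for b in (0, 1))
print(restored, broken)  # -> True True
```

Without the correct bitstream the fabricated chip deviates from the intended function on some inputs, which is what makes reverse engineering and unauthorized reuse of the redacted design impractical.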
Problem

Research questions and friction points this paper is trying to address: eFPGA, Deep Learning Hardware Security, Power-Efficient Design.

Innovation

Methods, ideas, or system contributions that make the work stand out: REDACTOR, eFPGA Security, AI Task Safety.
Yazan Baddour
Electrical Engineering, California State Uni. Long Beach, Long Beach, CA, USA
A. Hedayatipour
Electrical Engineering, California State Uni. Long Beach, Long Beach, CA, USA
Amin Rezaei
Assistant Professor & Associate Chair, California State University, Long Beach
Hardware Security, Machine Learning, Computer Architecture