🤖 AI Summary
Deep neural network (DNN) hardware accelerators face significant security risks from reverse engineering and IP theft. Method: This paper proposes an eFPGA-based redaction mechanism that selectively obfuscates critical compute modules pre-fabrication; authorized users restore full functionality at deployment by loading the legitimate bitstream. Contribution/Results: We present the first end-to-end eFPGA redaction framework tailored for DNN accelerators—covering architecture customization, sensitivity analysis, logic synthesis, place-and-route, and timing verification—and employ fracturable LUTs to enable fine-grained, module-level redaction. Evaluation on representative DNN accelerators shows modest overhead: <12% area, <8% delay, and <10% power increase. The approach substantially strengthens IP resilience against reverse engineering and unauthorized reuse, establishing an efficient, controllable, hardware-enforced security paradigm for AI chips.
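To make the LUT-based redaction primitive concrete, here is a minimal sketch (not the paper's implementation; the class and function names are hypothetical) of how a fracturable 6-input LUT can be split into two 5-input LUTs, the building block the summary credits with enabling fine-grained redaction:

```python
# Hedged toy model of (fracturable) look-up tables; illustrative only.
class LUT:
    """A k-input look-up table configured by a 2^k-bit truth table."""
    def __init__(self, k, config_bits):
        assert len(config_bits) == 2 ** k
        self.k = k
        self.config = config_bits  # the "bitstream" fragment for this LUT

    def eval(self, inputs):
        # Inputs (LSB first) index into the truth table.
        assert len(inputs) == self.k
        index = sum(bit << i for i, bit in enumerate(inputs))
        return self.config[index]

def fracture(lut6_config):
    """Split one 6-LUT truth table into two 5-LUTs (low/high halves),
    mimicking how a fracturable LUT exposes two smaller functions."""
    assert len(lut6_config) == 64
    return LUT(5, lut6_config[:32]), LUT(5, lut6_config[32:])

# Example: a 6-input AND, fractured into two 5-input halves.
and6 = [0] * 63 + [1]
lo, hi = fracture(and6)
# hi is the half with the sixth input tied to 1, so it computes a 5-input AND:
assert hi.eval([1, 1, 1, 1, 1]) == 1
assert lo.eval([1, 1, 1, 1, 1]) == 0
```

Because each fractured half carries its own configuration bits, redaction granularity drops from whole LUTs to sub-LUT functions, which is what permits module-level selectivity.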
📝 Abstract
With the ever-increasing integration of artificial intelligence into daily life and the growing value of well-trained models, the security of hardware accelerators supporting Deep Neural Networks (DNNs) has become paramount. eFPGA redaction has emerged as a promising solution to prevent hardware intellectual-property theft. This technique selectively conceals critical components of the design, allowing authorized users to restore functionality post-fabrication by inserting the correct bitstream. In this paper, we explore the redaction of DNN accelerators using eFPGAs, from specification to physical-design implementation. Specifically, we investigate the selection of critical DNN modules for redaction using both regular and fracturable look-up tables. We perform synthesis, timing verification, and place-and-route on redacted DNN accelerators. Furthermore, we evaluate the overhead of incorporating eFPGAs into DNN accelerators in terms of power, area, and delay, finding it reasonable given the security benefits.
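The restore-by-bitstream idea in the abstract can be sketched at toy scale (a hedged illustration, not the paper's toolflow; all names here are hypothetical): a full adder's carry logic is "redacted" into a programmable 3-input LUT, and only the correct bitstream restores the intended behavior.

```python
# Hedged sketch of eFPGA-style redaction: the carry function of a
# 1-bit full adder is replaced by a programmable 3-input LUT, so the
# fabricated design is functionally incomplete without the bitstream.
def full_adder(a, b, cin, carry_lut):
    s = a ^ b ^ cin                          # sum logic left in silicon
    cout = carry_lut[(cin << 2) | (b << 1) | a]  # redacted into a LUT
    return s, cout

# Correct bitstream: majority(a, b, cin), the full-adder carry-out.
correct = [1 if (i & 1) + ((i >> 1) & 1) + ((i >> 2) & 1) >= 2 else 0
           for i in range(8)]
wrong = [0] * 8  # an attacker without the bitstream sees an opaque LUT

assert full_adder(1, 1, 0, correct) == (0, 1)  # authorized: restored
assert full_adder(1, 1, 0, wrong) == (0, 0)    # unauthorized: broken
```

At accelerator scale the same principle applies to whole compute modules rather than single gates: reverse engineering the netlist reveals the eFPGA fabric but not the concealed logic, which exists only in the bitstream held by authorized users.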