EIM-TRNG: Obfuscating Deep Neural Network Weights with Encoding-in-Memory True Random Number Generator via RowHammer

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep neural network (DNN) model weights are vulnerable to reverse engineering, tampering, and leakage, posing critical threats to AI intellectual property and system security. Method: This paper proposes a DRAM-based in-memory true random number generator (TRNG) framework that exploits RowHammer-induced, controllably triggered bit flips as a high-entropy physical source for lightweight hardware-level security. The approach integrates probabilistic key extraction, weight encryption encoding, and integrity verification into a unified "encoding-in-memory" protection paradigm. Contribution/Results: Experimental evaluation demonstrates that the framework significantly enhances model resistance against reverse engineering and adversarial tampering while enabling trusted recovery, all with minimal hardware overhead. It introduces a novel, system-level security mechanism for protecting DNN model integrity and confidentiality, offering a practical pathway toward secure AI deployment and IP protection.

📝 Abstract
True Random Number Generators (TRNGs) play a fundamental role in hardware security, cryptographic systems, and data protection. In the context of Deep Neural Networks (DNNs), safeguarding model parameters, particularly weights, is critical to ensure the integrity, privacy, and intellectual property of AI systems. While software-based pseudo-random number generators are widely used, they lack the unpredictability and resilience offered by hardware-based TRNGs. In this work, we propose a novel and robust Encoding-in-Memory TRNG called EIM-TRNG that leverages the inherent physical randomness in DRAM cell behavior, particularly under RowHammer-induced disturbances, for the first time. We demonstrate how the unpredictable bit-flips generated through carefully controlled RowHammer operations can be harnessed as a reliable entropy source. Furthermore, we apply this TRNG framework to secure DNN weight data by encoding via a combination of fixed and unpredictable bit-flips. The encrypted data is later decrypted using a key derived from the probabilistic flip behavior, ensuring both data confidentiality and model authenticity. Our results validate the effectiveness of DRAM-based entropy extraction for robust, low-cost hardware security and offer a promising direction for protecting machine learning models at the hardware level.
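The abstract describes harvesting RowHammer bit-flips as an entropy source. Raw flip observations are typically biased (some cells flip far more often than others), so TRNG pipelines usually apply a debiasing step before the bits are used. The paper does not specify its extractor; the sketch below uses the classic von Neumann extractor purely as an illustration, with a hard-coded bit list standing in for real flip observations.

```python
def von_neumann_debias(bits):
    """Classic von Neumann extractor: for each non-overlapping pair,
    emit 0 for (0,1), 1 for (1,0), and discard (0,0)/(1,1).
    Removes bias from independent but biased bit sources."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulated flip map: 1 = DRAM cell flipped under hammering, 0 = cell held.
raw_flips = [1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1]
random_bits = von_neumann_debias(raw_flips)
print(random_bits)  # [1, 0, 1, 0]
```

The extractor trades throughput for uniformity: on average it keeps fewer than half of the input bits, which is acceptable when the DRAM array offers a large population of flip-prone cells.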
Problem

Research questions and friction points this paper is trying to address.

Securing DNN weights using hardware-based TRNG
Leveraging RowHammer-induced DRAM bit-flips for entropy
Ensuring DNN model confidentiality and authenticity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages DRAM cell randomness via RowHammer
Encodes DNN weights with unpredictable bit-flips
Decrypts data using probabilistic flip-derived key
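The three points above can be sketched end to end. The paper's exact encoding is not given here, so the snippet below is a hypothetical minimal scheme: a key is derived by hashing the observed flip pattern, and quantized weight bytes are XOR-encoded with that key stream, making decoding the same symmetric operation. All names and the simulated flip map are illustrative, not the authors' implementation.

```python
import hashlib

def key_from_flips(flip_map: bytes) -> bytes:
    # Hypothetical key derivation: hash the probabilistic flip
    # pattern observed under RowHammer into a fixed-size key.
    return hashlib.sha256(flip_map).digest()

def xor_encode(weights: bytes, key: bytes) -> bytes:
    # XOR each weight byte with the key stream; applying the same
    # function with the same key decodes (symmetric operation).
    return bytes(w ^ key[i % len(key)] for i, w in enumerate(weights))

# Simulated flip map standing in for RowHammer-induced bit-flips.
flip_map = bytes([0b00010000, 0b00000010, 0b10000000, 0b00000001])
key = key_from_flips(flip_map)

weights = bytes(range(16))           # toy quantized weight bytes
cipher = xor_encode(weights, key)    # obfuscated weights stored in memory
plain = xor_encode(cipher, key)      # recovery with the flip-derived key
assert plain == weights and cipher != weights
```

The design point this illustrates: because the key exists only as the physical flip behavior of specific DRAM cells, an attacker who dumps the encoded weights without reproducing those flips recovers only ciphertext.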
Ranyang Zhou
Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ, USA
Abeer Matar A. Almalky
Department of Computer Science, State University of New York at Binghamton, NY, USA
Gamana Aragonda
Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ, USA
Sabbir Ahmed
Islamic University of Technology
Computer Vision · Deep Learning
Filip Roth Trønnes-Christensen
Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ, USA
Adnan Siraj Rakin
Department of Computer Science, State University of New York at Binghamton, NY, USA
Shaahin Angizi
Assistant Professor at New Jersey Institute of Technology
In-Memory Computing · In-Sensor Computing · Memory Security · AI · Digital Design