🤖 AI Summary
Deep neural network (DNN) model weights are vulnerable to reverse engineering, tampering, and leakage, posing critical threats to AI intellectual property and system security.
Method: This paper proposes a DRAM-based in-memory true random number generator (TRNG) framework that leverages the RowHammer effect, exploiting controllably induced bit flips as a physical entropy source for lightweight hardware-level security. The approach integrates probabilistic key extraction, weight encryption encoding, and integrity verification into a unified "encode-as-memory" protection paradigm.
Contribution/Results: Experimental evaluation demonstrates that the framework significantly enhances model resistance against reverse engineering and adversarial tampering while enabling trusted recovery—all with minimal hardware overhead. It introduces a novel, system-level security mechanism for protecting DNN model integrity and confidentiality, offering a practical pathway toward secure AI deployment and IP protection.
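The key-extraction step described above can be sketched in simulation. The snippet below is a minimal conceptual model, not the paper's implementation: real RowHammer entropy comes from repeatedly activating aggressor rows in physical DRAM, whereas here a seeded Bernoulli process (with an assumed flip probability) stands in for the hardware, and the raw flip pattern is condensed into a 256-bit key by hashing.

```python
import hashlib
import random


def simulate_rowhammer_flips(n_cells: int, flip_prob: float, seed: int) -> list[int]:
    """Model RowHammer-induced bit flips as independent Bernoulli events.

    In hardware, flip outcomes depend on per-cell physical variation; here a
    seeded PRNG is a stand-in for that process (illustrative assumption only).
    """
    rng = random.Random(seed)
    return [1 if rng.random() < flip_prob else 0 for _ in range(n_cells)]


def extract_key(flip_pattern: list[int]) -> bytes:
    """Condense the raw flip pattern into a 256-bit key via hashing."""
    bits = "".join(str(b) for b in flip_pattern)
    return hashlib.sha256(bits.encode()).digest()


# Hypothetical parameters: 4096 monitored cells, 30% flip probability.
pattern = simulate_rowhammer_flips(n_cells=4096, flip_prob=0.3, seed=0xD1)
key = extract_key(pattern)
assert len(key) == 32  # 256-bit key
```

Hashing the pattern acts as a simple entropy conditioner: even if individual cells flip with biased probability, the digest spreads that entropy uniformly across the key bits.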
📝 Abstract
True Random Number Generators (TRNGs) play a fundamental role in hardware security, cryptographic systems, and data protection. In the context of Deep Neural Networks (DNNs), safeguarding model parameters, particularly weights, is critical to ensure the integrity, privacy, and intellectual property of AI systems. While software-based pseudo-random number generators are widely used, they lack the unpredictability and resilience offered by hardware-based TRNGs. In this work, we propose a novel and robust Encoding-in-Memory TRNG called EIM-TRNG that leverages the inherent physical randomness in DRAM cell behavior, particularly under RowHammer-induced disturbances, for the first time. We demonstrate how the unpredictable bit-flips generated through carefully controlled RowHammer operations can be harnessed as a reliable entropy source. Furthermore, we apply this TRNG framework to secure DNN weight data by encoding via a combination of fixed and unpredictable bit-flips. The encrypted data is later decrypted using a key derived from the probabilistic flip behavior, ensuring both data confidentiality and model authenticity. Our results validate the effectiveness of DRAM-based entropy extraction for robust, low-cost hardware security and offer a promising direction for protecting machine learning models at the hardware level.
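The encode/decode flow the abstract describes can be illustrated with a toy sketch. The code below is an assumption-laden simplification: it uses a repeating-key XOR as a stand-in for the paper's encoding scheme, a fixed placeholder byte string in place of a real DRAM flip pattern, and a SHA-256 digest as the integrity tag. The point is only to show the round trip: weights are encrypted with a key derived from flip behavior, a tag allows authenticity checking, and decryption with the same key recovers the original weights.

```python
import hashlib


def xor_transform(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; self-inverse, so the same
    function both encrypts and decrypts (toy stand-in for the real encoding)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


# Hypothetical 8-bit quantized DNN weights.
weights = bytes([12, 200, 7, 99, 150, 3, 255, 0])

# Key derived from a (placeholder) RowHammer flip pattern.
key = hashlib.sha256(b"flip-pattern-placeholder").digest()

ciphertext = xor_transform(weights, key)
integrity_tag = hashlib.sha256(weights).hexdigest()

# Later: decrypt and verify model authenticity.
recovered = xor_transform(ciphertext, key)
assert recovered == weights
assert hashlib.sha256(recovered).hexdigest() == integrity_tag
```

Because XOR is its own inverse, a single routine handles both directions; the integrity tag fails if either the ciphertext or the key (i.e., the underlying flip behavior) has been tampered with.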