Quantitative Analysis of Deeply Quantized Tiny Neural Networks Robust to Adversarial Attacks

📅 2025-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deploying deep neural networks on edge devices requires ultra-low-bit quantization (e.g., ternary), yet such models often suffer from poor adversarial robustness under both white-box and black-box attacks. Method: This paper proposes a lightweight and robust deeply quantized neural network, integrating Stochastic Ternary Quantization (STQ) with per-layer Jacobian Regularization within a quantization-aware training framework (QKeras) to jointly optimize the model architecture and its gradient sensitivity. Contribution/Results: Evaluated on CIFAR-10 and Google Speech Commands, the method achieves significant model compression while outperforming Quanos under white-box attacks and the DS-CNN MLCommons/TinyML benchmark under black-box attacks, enhancing robustness against both attack types in a ternary network suited to resource-constrained edge deployment.

📝 Abstract
Reducing the memory footprint of Machine Learning (ML) models, especially Deep Neural Networks (DNNs), is imperative to facilitate their deployment on resource-constrained edge devices. However, a notable drawback of DNN models lies in their susceptibility to adversarial attacks, wherein minor input perturbations can deceive them. A primary challenge is therefore the development of accurate, resilient, and compact DNN models suitable for deployment on resource-constrained edge devices. This paper presents a compact DNN model that exhibits resilience against both black-box and white-box adversarial attacks, achieved through training with the QKeras quantization-aware training framework. The study explores the potential of QKeras and an adversarial robustness technique, Jacobian Regularization (JR), to co-optimize the DNN architecture through a per-layer JR methodology. The result is a DNN model, devised with this co-optimization strategy, based on Stochastic Ternary Quantization (STQ). Its performance was compared against existing DNN models under various white-box and black-box attacks. The experimental findings revealed that the proposed DNN model had a small footprint and, on average, exhibited better performance than the Quanos and DS-CNN MLCommons/TinyML (MLC/T) benchmarks when challenged with white-box and black-box attacks, respectively, on the CIFAR-10 image and Google Speech Commands audio datasets.
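The co-optimization described in the abstract couples quantization-aware training with a Jacobian Regularization penalty, which discourages large input-output sensitivity (the property adversarial perturbations exploit). The sketch below illustrates only the JR idea on a toy two-layer network with hypothetical weights, using a finite-difference Jacobian in pure NumPy; it is not the paper's per-layer QKeras implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network standing in for the quantized DNN.
W1 = rng.normal(scale=0.5, size=(8, 4))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(4, 3))   # hidden -> logits

def forward(x):
    h = np.tanh(W1.T @ x)
    return W2.T @ h

def jacobian_penalty(x, eps=1e-5):
    """Finite-difference estimate of ||df/dx||_F^2: the squared Frobenius
    norm of the input-output Jacobian that JR adds (scaled by a
    hyperparameter) to the task loss to damp input sensitivity."""
    f0 = forward(x)
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (forward(xp) - f0) / eps
    return float(np.sum(J ** 2))

x = rng.normal(size=8)
penalty = jacobian_penalty(x)
# Training objective would then be: task_loss + lambda_jr * penalty
```

In a real quantization-aware training loop the penalty is computed per layer and backpropagated with automatic differentiation rather than finite differences.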
Problem

Research questions and friction points this paper is trying to address.

Develop compact DNN models for edge devices
Enhance DNN resilience to adversarial attacks
Optimize DNN using QKeras and Jacobian Regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

QKeras framework for quantization-aware training
Jacobian Regularization for adversarial robustness
Stochastic Ternary Quantization for compact DNN models
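The Stochastic Ternary Quantization listed above can be illustrated with a minimal sketch. The assumed form here is unbiased stochastic rounding to {-1, 0, +1}; QKeras's actual quantizer also handles per-layer scaling and straight-through gradients, which are omitted.

```python
import numpy as np

def stochastic_ternarize(w, rng):
    """Stochastically round weights to {-1, 0, +1}.

    Each weight is clipped to [-1, 1]; it then becomes sign(w) with
    probability |w| and 0 otherwise, so the quantizer is unbiased in
    expectation: E[q] = clip(w). Illustrative sketch only, not the
    exact QKeras implementation.
    """
    w = np.clip(w, -1.0, 1.0)
    keep = rng.random(w.shape) < np.abs(w)
    return np.sign(w) * keep

rng = np.random.default_rng(0)
w = rng.normal(scale=0.3, size=(4, 4))  # hypothetical layer weights
q = stochastic_ternarize(w, rng)        # each entry is -1, 0, or +1
```

Because each ternary weight needs at most 2 bits (versus 32 for float32), this is where the model-compression benefit comes from, while the stochasticity keeps the quantizer unbiased during training.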