Constraint Guided Model Quantization of Neural Networks

📅 2024-09-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing quantization methods for edge-device deployment require manual hyperparameter tuning to meet computational resource constraints, making it difficult to guarantee strict upper bounds on hardware costs. Method: We propose a constraint-guided neural network quantization framework that jointly models resource constraints and end-to-end optimizes per-layer bitwidths under a given computational budget. Our approach employs differentiable bitwidth optimization and gradient reparameterization, enabling fully automatic, hyperparameter-free quantization that strictly satisfies pre-specified cost constraints. It supports quantization-aware training and automatically yields optimal mixed-precision configurations. Contribution/Results: To the best of our knowledge, this is the first method ensuring hard constraint satisfaction without manual intervention. Evaluated on MNIST, it matches state-of-the-art accuracy while achieving 100% constraint compliance, significantly enhancing reliability and automation in edge deployment.
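The summary mentions quantization-aware training with differentiable bitwidth optimization. A common building block for such methods is fake quantization: weights are rounded to a given bit-width in the forward pass while the backward pass treats the rounding as identity (a straight-through estimator). The sketch below is a minimal, illustrative NumPy version of uniform fake quantization, not the paper's actual CGMQ procedure:

```python
import numpy as np

def fake_quantize(w, bits):
    """Uniformly quantize w to `bits` bits, then dequantize back to floats.
    In a full QAT implementation the rounding would be treated as identity
    in the backward pass (straight-through estimator)."""
    levels = 2 ** bits - 1                     # number of quantization steps
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / levels
    q = np.round((w - w_min) / scale)          # integer grid index per weight
    return q * scale + w_min                   # map back to the float range

w = np.array([-0.7, -0.1, 0.2, 0.9])
w8 = fake_quantize(w, 8)   # near-lossless at 8 bits
w2 = fake_quantize(w, 2)   # visibly coarse at 2 bits
```

Lowering the bit-width shrinks the number of representable levels, which is exactly the knob a cost-constrained method like CGMQ must trade off against accuracy per layer.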

📝 Abstract
Deploying neural networks on the edge has become increasingly important as deep learning is applied in a growing number of applications. Edge devices typically have small computational resources, since large computational resources result in higher energy consumption, which is impractical for these devices. To reduce the complexity of neural networks, a wide range of quantization methods has been proposed in recent years. This work proposes Constraint Guided Model Quantization (CGMQ), a quantization-aware training algorithm that uses an upper bound on the computational resources and reduces the bit-widths of the parameters of the neural network. Unlike prior work, CGMQ does not require tuning a hyperparameter to obtain a mixed-precision neural network that satisfies the predefined computational cost constraint. It is shown on MNIST that the performance of CGMQ is competitive with state-of-the-art quantization-aware training algorithms, while guaranteeing satisfaction of the cost constraint.
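The abstract's central idea is a hard upper bound on computational cost, where a natural cost model is the total weight memory: the sum over layers of parameter count times bit-width. The sketch below illustrates that constraint with a simple greedy bit-width reduction; this is an assumed toy heuristic for exposition, not the CGMQ algorithm itself (which optimizes bit-widths during training):

```python
def model_cost(layer_params, bitwidths):
    """Total weight-memory cost in bits: sum over layers of #params * bit-width."""
    return sum(p * b for p, b in zip(layer_params, bitwidths))

def reduce_to_budget(layer_params, budget, start_bits=8, min_bits=2):
    """Greedily lower the bit-width of the most expensive layer until the
    cost constraint is met (or every layer is already at min_bits)."""
    bits = [start_bits] * len(layer_params)
    while model_cost(layer_params, bits) > budget:
        # layers that can still be reduced
        candidates = [i for i, b in enumerate(bits) if b > min_bits]
        if not candidates:
            raise ValueError("budget infeasible even at min_bits")
        # pick the reducible layer with the largest current cost
        i = max(candidates, key=lambda j: layer_params[j] * bits[j])
        bits[i] -= 1
    return bits

# toy 3-layer network with 1000, 5000, and 2000 weights
params = [1000, 5000, 2000]
bits = reduce_to_budget(params, budget=40_000)
assert model_cost(params, bits) <= 40_000  # hard constraint holds by construction
```

Because the loop only terminates once the cost is under budget, constraint satisfaction is guaranteed by construction, mirroring the paper's claim of 100% constraint compliance without a tunable hyperparameter.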
Problem

Research questions and friction points this paper is trying to address.

Reducing neural network complexity for edge deployment
Achieving mixed precision quantization without hyperparameter tuning
Guaranteeing computational constraint satisfaction on edge hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constraint Guided Model Quantization algorithm
Reduces bit-widths under computational constraints
Generates mixed precision networks automatically
Quinten Van Baelen
KU Leuven, Geel Campus, Dept. of Computer Science; Leuven.AI, B-2440 Geel, Belgium; Flanders Make@KU Leuven, Belgium
Peter Karsmakers
KU Leuven
machine learning · digital signal processing · biomedical technology