🤖 AI Summary
Existing quantization methods for edge-device deployment require manual hyperparameter tuning to meet computational resource constraints, making it difficult to guarantee strict upper bounds on hardware costs.
Method: We propose a constraint-guided neural network quantization framework that jointly models resource constraints and optimizes per-layer bit-widths end-to-end under a given computational budget. Our approach employs differentiable bit-width optimization and gradient reparameterization, enabling fully automatic, hyperparameter-free quantization that strictly satisfies pre-specified cost constraints. It supports quantization-aware training and automatically yields optimal mixed-precision configurations.
Contribution/Results: To the best of our knowledge, this is the first method to guarantee hard-constraint satisfaction without manual intervention. Evaluated on MNIST, it matches state-of-the-art accuracy while achieving 100% constraint compliance, significantly enhancing reliability and automation in edge deployment.
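The core idea of guaranteeing the budget can be illustrated with a minimal NumPy sketch. This is not the paper's differentiable algorithm: the greedy projection step and all names here are illustrative assumptions, showing only how a hard bit budget can always be enforced by lowering per-layer bit-widths until the total cost fits.

```python
import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantization of a weight array to the given bit-width.
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

def project_to_budget(bitwidths, layer_sizes, budget):
    # Greedily lower the bit-width of the most expensive layer until the
    # total cost (bits * parameters, summed over layers) fits the budget.
    # This guarantees the constraint holds, with no hyperparameter to tune.
    bitwidths = bitwidths.copy()
    while np.dot(bitwidths, layer_sizes) > budget:
        i = int(np.argmax(bitwidths * layer_sizes))
        if bitwidths[i] <= 2:
            raise ValueError("budget infeasible even at minimum bit-width")
        bitwidths[i] -= 1
    return bitwidths

layer_sizes = np.array([1000, 500, 100])  # parameters per layer (toy example)
bitwidths = np.array([8, 8, 8])           # initial per-layer bit-widths
budget = 10000                            # total bit budget for all weights

bitwidths = project_to_budget(bitwidths, layer_sizes, budget)
assert np.dot(bitwidths, layer_sizes) <= budget  # hard constraint satisfied
```

In an actual quantization-aware training loop, a step like this would run alongside gradient updates, so the returned mixed-precision configuration always respects the predefined cost constraint.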
📝 Abstract
Deploying neural networks on the edge has become increasingly important as deep learning is applied in a growing number of applications. Edge devices typically have limited computational resources, since larger computational resources result in higher energy consumption, which is impractical for these devices. To reduce the complexity of neural networks, a wide range of quantization methods have been proposed in recent years. This work proposes Constraint Guided Model Quantization (CGMQ), a quantization-aware training algorithm that uses an upper bound on the computational resources and reduces the bit-widths of the parameters of the neural network. Unlike prior work, CGMQ does not require tuning a hyperparameter to obtain a mixed-precision neural network that satisfies the predefined computational cost constraint. It is shown on MNIST that the performance of CGMQ is competitive with state-of-the-art quantization-aware training algorithms, while guaranteeing satisfaction of the cost constraint.