Learning from Loss Landscape: Generalizable Mixed-Precision Quantization via Adaptive Sharpness-Aware Gradient Aligning

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the prohibitive computational cost of mixed-precision quantization (MPQ) policy search on large-scale datasets, this paper proposes a cross-dataset generalizable MPQ paradigm: efficiently searching quantization policies on small-scale datasets (e.g., CIFAR-10) and transferring them to large-scale ones (e.g., ImageNet). The core contribution is the first loss-landscape-aware generalizable MPQ framework, integrating three key techniques: sharpness-aware minimization (SAM), implicit gradient direction alignment, and an adaptive perturbation radius. Together these mitigate multi-objective gradient conflicts and make the transferred policies more robust. Experiments show that searching on CIFAR-10 (only 0.5% the size of the ImageNet training set) achieves accuracy on ImageNet comparable to full-dataset search while significantly reducing computational overhead, improving search efficiency by up to 150% over baseline approaches and establishing a new trade-off between search cost and quantization performance.
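The gradient-conflict problem the summary mentions can be illustrated with a small NumPy sketch. Note this is an *explicit* projection (in the style of PCGrad), shown only to convey the idea of aligning conflicting objective gradients; the paper's own method is an *implicit* alignment, and all names and values here are illustrative assumptions:

```python
import numpy as np

def align(g1, g2):
    """If two objectives' gradients conflict (negative inner product),
    project g1 onto the subspace orthogonal to g2, removing the
    conflicting component. PCGrad-style illustration only; the paper
    uses an implicit alignment scheme instead."""
    dot = g1 @ g2
    if dot < 0:
        g1 = g1 - (dot / (g2 @ g2)) * g2  # drop the component opposing g2
    return g1

# Hypothetical gradients of two objectives (e.g., task loss vs. a
# quantization-related loss) that point in conflicting directions.
g_task = np.array([1.0, 0.0])
g_quant = np.array([-1.0, 1.0])
g_aligned = align(g_task, g_quant)  # -> array([0.5, 0.5]); conflict removed
```

After projection, `g_aligned @ g_quant >= 0`, so a step along the aligned gradient no longer increases the second objective to first order.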

📝 Abstract
Mixed Precision Quantization (MPQ) has become an essential technique for optimizing neural networks by determining the optimal bitwidth per layer. Existing MPQ methods, however, face a major hurdle: they require a computationally expensive search for quantization policies on large-scale datasets. To resolve this issue, we introduce a novel approach that first searches for quantization policies on small datasets and then generalizes them to large-scale datasets. This approach simplifies the process, eliminating the need for large-scale quantization fine-tuning and requiring only model weight adjustment. Our method is characterized by three key techniques: sharpness-aware minimization for enhanced quantization generalization, implicit gradient direction alignment to handle gradient conflicts among different optimization objectives, and an adaptive perturbation radius to accelerate optimization. Both theoretical analysis and experimental results validate our approach. Using the CIFAR-10 dataset (just 0.5% the size of the ImageNet training data) for MPQ policy search, we achieved equivalent accuracy on ImageNet with a significantly lower computational cost, while improving efficiency by up to 150% over the baselines.
Problem

Research questions and friction points this paper is trying to address.

Optimizing neural networks via efficient mixed-precision quantization policy search
Reducing computational cost by generalizing small-dataset policies to large datasets
Enhancing quantization generalization with sharpness-aware and gradient-aligning techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sharpness-aware minimization enhances quantization generalization
Implicit gradient alignment resolves optimization conflicts
Adaptive perturbation radius accelerates optimization process
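The first innovation, sharpness-aware minimization, can be sketched on a toy problem. SAM perturbs the weights toward the locally sharpest direction before computing the update gradient, steering optimization toward flat minima. The loss, learning rate, and perturbation radius below are illustrative assumptions, not the paper's quantization objective or settings:

```python
import numpy as np

def grad(w):
    # Gradient of a toy quadratic loss L(w) = 0.5 * ||w||^2
    # (stand-in for the real quantization-aware training loss).
    return w

def sam_step(w, rho=0.05, lr=0.1):
    """One sharpness-aware minimization (SAM) update:
    1) ascend by radius rho along the normalized gradient to reach the
       locally 'sharpest' nearby point;
    2) descend using the gradient evaluated at that perturbed point."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sharp = grad(w + eps)                      # gradient at perturbed weights
    return w - lr * g_sharp

w = np.array([1.0, -2.0])
for _ in range(50):
    w = sam_step(w)
# w is driven close to the flat minimum at the origin.
```

The paper's adaptive variant would adjust `rho` during optimization rather than keeping it fixed, which is what accelerates convergence; a fixed radius is used here to keep the sketch minimal.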
Lianbo Ma
Northeastern University
Multi-objective Optimization · Neural Architecture Search · Machine Learning · Edge Computing

Jianlun Ma
College of Software, Northeastern University, Shenyang, China

Yuee Zhou
College of Software, Northeastern University, Shenyang, China

Guoyang Xie
Algorithm Manager, Department of Intelligent Manufacturing, CATL
AI in Industrial Manufacturing · Robotics · Anomaly Detection

Qiang He
College of Computer Science and Engineering, Northeastern University, Shenyang, China

Zhichao Lu
City University of Hong Kong
Evolutionary Computation · Bilevel Optimization · Neural Architecture Search