Revolutionizing Mixed Precision Quantization: Towards Training-free Automatic Proxy Discovery via Large Language Models

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mixed-precision quantization (MPQ) suffers from hand-crafted proxy design, high optimization costs, and poor flexibility. Method: We propose the first training-free, large language model (LLM)-based automatic proxy discovery framework for MPQ. Departing from conventional proxy modeling and gradient-based optimization, our approach uses prompt engineering to drive Direct Policy Optimization (DPO)-based reinforcement learning, establishing a positive feedback loop between task performance and quantization policy search. Contribution/Results: To our knowledge, this is the first work to employ LLMs as zero-shot proxy generators, enabling fully automated, low-overhead MPQ policy search. Evaluated on mainstream benchmarks, our method achieves state-of-the-art accuracy while drastically reducing reliance on domain expertise and computational resources, advancing MPQ toward an efficient, general-purpose, and adaptive paradigm.

📝 Abstract
Mixed-Precision Quantization (MPQ) liberates Deep Neural Networks (DNNs) from the Out-Of-Memory (OOM) bottleneck and has garnered increasing research attention. However, conventional methods either search via costly differentiable optimization, which is neither efficient nor flexible, or learn a quantized DNN from a proxy (e.g., HAWQ) manually designed by human experts, which is labor-intensive and demands substantial expert knowledge. Can we design a proxy without involving any human experts or training? In this paper, we provide an affirmative answer by proposing a novel Large Language Model (LLM)-driven Training-free Automatic Proxy (dubbed TAP) discovery framework, which reforms the design paradigm of MPQ by utilizing LLMs to automatically find superior TAPs tailored for MPQ. In addition, to bridge the gap between black-box LLMs and the difficult MPQ task, we propose simple Direct Policy Optimization (DPO)-based reinforcement learning that enhances the LLM's reasoning by optimizing prompts, constructing a positive feedback loop between the LLM and the MPQ task so that the LLM generates better TAPs in the next evolution round. Extensive experiments on mainstream benchmarks demonstrate that TAP achieves state-of-the-art performance. Finally, we believe that TAP will contribute to the MPQ community by providing a new perspective on LLM-driven algorithm design.
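To make the abstract's setting concrete, the sketch below shows how a training-free proxy can drive mixed-precision bit allocation: the proxy scores each layer's sensitivity, and more sensitive layers receive wider bitwidths. The proxy used here (mean absolute weight magnitude) is an illustrative stand-in we chose, not the proxy discovered by TAP, and `allocate_bits` is a hypothetical rank-based allocator, not the paper's policy search.

```python
# Illustrative sketch of proxy-driven mixed-precision bit allocation.
# The proxy (mean absolute weight) is a stand-in for illustration only,
# NOT the proxy discovered by the paper's TAP framework.

def proxy_score(weights):
    """Hypothetical training-free proxy: mean absolute weight magnitude."""
    return sum(abs(w) for w in weights) / len(weights)

def allocate_bits(layers, bit_choices=(2, 4, 8)):
    """Rank layers by proxy score and spread the bitwidth choices across
    the ranking: the most sensitive layers get the widest bitwidth."""
    ranked = sorted(layers, key=lambda name: -proxy_score(layers[name]))
    widths = sorted(bit_choices, reverse=True)  # e.g. [8, 4, 2]
    policy = {}
    for i, name in enumerate(ranked):
        # map the rank position onto the descending list of bit choices
        bucket = min(i * len(widths) // len(ranked), len(widths) - 1)
        policy[name] = widths[bucket]
    return policy

layers = {
    "conv1": [0.9, -1.1, 0.8],    # large weights -> high proxy score
    "conv2": [0.2, -0.1, 0.3],
    "fc":    [0.01, 0.02, -0.03],
}
print(allocate_bits(layers))  # -> {'conv1': 8, 'conv2': 4, 'fc': 2}
```

A real pipeline would quantize each layer to its assigned bitwidth and measure task accuracy; here the allocator only demonstrates how a scalar proxy induces a bitwidth policy.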
Problem

Research questions and friction points this paper is trying to address.

Automatically discovers training-free proxies for mixed-precision quantization.
Replaces manual expert design with LLM-driven automatic proxy generation.
Bridges LLMs and quantization via reinforcement learning for enhanced reasoning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven automatic proxy discovery for quantization
Training-free framework using reinforcement learning for prompts
Direct Policy Optimization enhances LLM reasoning for MPQ
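The feedback loop behind the bullets above can be sketched as follows. This is a toy simulation under loud assumptions: `fake_llm` cycles through hand-written candidate proxies instead of calling a real LLM, and the reward is pairwise rank agreement with a reference sensitivity rather than actual quantized-task accuracy. It only illustrates the loop structure (generate proxy, score it, feed the reward back into the prompt), not the paper's DPO-based optimization.

```python
# Toy sketch of the LLM-in-the-loop proxy search. `fake_llm` and the
# rank-agreement reward are stand-ins we invented; a real system would
# call an LLM API and evaluate quantized models on the target task.

def fake_llm(prompt, generation):
    """Stand-in for the LLM proxy generator: cycles through hand-written
    candidate scoring functions instead of sampling from a model."""
    candidates = [
        lambda w: sum(abs(x) for x in w),        # L1 norm
        lambda w: max(abs(x) for x in w),        # max magnitude
        lambda w: sum(x * x for x in w) ** 0.5,  # L2 norm
    ]
    return candidates[generation % len(candidates)]

def pairwise_agreement(proxy, layers, true_sens):
    """Stand-in reward: fraction of layer pairs the candidate proxy orders
    the same way as a reference sensitivity."""
    names = list(layers)
    agree, total = 0, 0
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = names[i], names[j]
            agree += (proxy(layers[a]) > proxy(layers[b])) == (true_sens[a] > true_sens[b])
            total += 1
    return agree / total

def search(layers, true_sens, generations=3):
    """Positive feedback loop: each round's reward is appended to the
    prompt so the next generation can condition on past performance."""
    prompt = "Design a training-free proxy for MPQ."
    best_reward = float("-inf")
    for g in range(generations):
        proxy = fake_llm(prompt, g)
        reward = pairwise_agreement(proxy, layers, true_sens)
        best_reward = max(best_reward, reward)
        prompt += f" Previous reward: {reward:.3f}."
    return best_reward
```

The prompt mutation here is a plain string append; the paper instead optimizes prompts with DPO-based reinforcement learning so that later generations yield measurably better proxies.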
Haidong Kang
Northeastern University
Machine Learning · Self-evolution AI · Domain Generalization · Diffusion · LLM
Jun Du
School of Software Engineering, Beijing Jiaotong University
Lihong Lin
School of Software, Northeastern University