🤖 AI Summary
Mixed-precision quantization (MPQ) suffers from hand-crafted proxy design, high optimization costs, and poor flexibility. Method: We propose the first training-free, large language model (LLM)-based automatic proxy discovery framework for MPQ. Departing from conventional proxy modeling and gradient-based optimization, our approach leverages prompt engineering to drive Direct Policy Optimization (DPO)-based reinforcement learning, establishing a positive feedback loop between task performance and quantization policy search. Contribution/Results: To our knowledge, this is the first work to employ LLMs as zero-shot proxy generators, enabling fully automated, low-overhead MPQ policy search. Evaluated on mainstream benchmarks, our method achieves state-of-the-art accuracy while drastically reducing reliance on domain expertise and computational resources, advancing MPQ toward an efficient, general-purpose, and adaptive paradigm.
📝 Abstract
Mixed-Precision Quantization (MPQ) frees Deep Neural Networks (DNNs) from the Out-Of-Memory (OOM) bottleneck and has garnered increasing research attention. However, conventional methods either search via costly differentiable optimization, which is neither efficient nor flexible, or learn a quantized DNN from a proxy (e.g., HAWQ) manually designed by human experts, which is labor-intensive and demands substantial expert knowledge. Can we design a proxy without any human experts or training? In this paper, we give an affirmative answer by proposing a novel Large Language Model (LLM)-driven Training-free Automatic Proxy (dubbed TAP) discovery framework, which reforms the design paradigm of MPQ by using LLMs to automatically find superior proxies tailored for MPQ. In addition, to bridge the gap between black-box LLMs and the challenging MPQ task, we propose simple Direct Policy Optimization (DPO)-based reinforcement learning that enhances the LLM's reasoning by optimizing prompts, constructing a positive feedback loop between the LLM and the MPQ task so that the LLM generates better proxies in the next evolution round. Extensive experiments on mainstream benchmarks demonstrate that TAP achieves state-of-the-art performance. We believe TAP will contribute to the MPQ community by offering a new perspective on LLM-driven algorithm design.
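The feedback loop described above can be sketched in miniature. This is a toy illustration, not the paper's implementation: the LLM call is stubbed by `llm_propose_proxies` (a hypothetical stand-in returning a fixed pool of candidate scoring functions), the bit-allocation policy is a simple greedy heuristic, and `evaluate` is a synthetic stand-in for task performance. The point is only the control flow: proxies are proposed, used to derive a bit-width policy, scored, and the score is fed back into the prompt for the next round.

```python
import random

def llm_propose_proxies(prompt, n=3):
    """Hypothetical stand-in for a black-box LLM call that, given a prompt,
    returns candidate proxy functions scoring a layer's quantization
    sensitivity. Here we just sample from a hard-coded pool."""
    pool = [
        lambda w: sum(abs(x) for x in w) / len(w),  # mean |w|
        lambda w: max(abs(x) for x in w),           # max |w|
        lambda w: sum(x * x for x in w) / len(w),   # mean w^2
    ]
    return random.sample(pool, n)

def assign_bits(proxy, layers, budget_bits):
    """Greedy policy: upgrade the layers the proxy ranks most sensitive
    from 4-bit to 8-bit while a simple total-bit budget allows it."""
    order = sorted(range(len(layers)), key=lambda i: -proxy(layers[i]))
    bits = [4] * len(layers)
    for i in order:
        if sum(bits) + 4 <= budget_bits:  # upgrading 4 -> 8 costs 4 bits
            bits[i] = 8
    return bits

def evaluate(bits, layers):
    """Synthetic stand-in for task accuracy: rewards spending precision
    on high-variance layers (a real system would measure accuracy)."""
    return sum(b * sum(x * x for x in w) / len(w)
               for b, w in zip(bits, layers))

random.seed(0)
layers = [[random.gauss(0, s) for _ in range(16)] for s in (0.1, 1.0, 0.5)]
prompt = "Design a proxy scoring layer sensitivity for MPQ."
best_score, best_proxy = float("-inf"), None
for step in range(3):  # evolution rounds: results feed back into the prompt
    for proxy in llm_propose_proxies(prompt):
        score = evaluate(assign_bits(proxy, layers, budget_bits=16), layers)
        if score > best_score:
            best_score, best_proxy = score, proxy
    prompt += f"\nBest score so far: {best_score:.3f}; improve on it."
```

In the actual framework the prompt update is driven by DPO-based reinforcement learning rather than the naive string append shown here; the sketch only mirrors the loop structure of propose, evaluate, and feed back.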