🤖 AI Summary
This work addresses the challenge of post-training quantization for large vision-language models (LVLMs) in multimodal reasoning. To this end, we propose an efficient quantization framework that explicitly models cross-layer dependencies. Our method introduces two key innovations: (i) the use of activation entropy to characterize inter-layer dependencies for principled module partitioning; and (ii) a decoupling mechanism for the visual encoder that enables fine-grained, low-overhead decomposition of the search space. The resulting approach achieves a strong trade-off between accuracy and efficiency: on the 13B LLaVA model, it delivers 2.78× memory compression and a 1.44× generation speedup with no performance degradation across diverse multimodal understanding and generation benchmarks, outperforming state-of-the-art quantization methods.
📝 Abstract
In this paper, we propose a post-training quantization framework for large vision-language models (LVLMs) to enable efficient multi-modal inference. Conventional quantization methods sequentially search layer-wise rounding functions by minimizing activation discretization errors, which fails to find the optimal quantization strategy because cross-layer dependency is ignored. In contrast, we mine the cross-layer dependency that significantly influences the discretization errors of the entire vision-language model, and embed this dependency into the search for the optimal quantization strategy at low search cost. Specifically, we observe a strong correlation between activation entropy and the cross-layer dependency with respect to output discretization errors. We therefore employ entropy as a proxy to partition blocks optimally, achieving a satisfactory trade-off between discretization error and search cost. Moreover, we optimize the visual encoder to disentangle the cross-layer dependency, enabling fine-grained decomposition of the search space and further reducing search cost without harming quantization accuracy. Experimental results demonstrate that our method compresses memory by 2.78× and increases generation speed by 1.44× on the 13B LLaVA model without performance degradation on diverse multi-modal reasoning tasks. Code is available at https://github.com/ChangyuanWang17/QVLM.
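To make the entropy-as-proxy idea concrete, the sketch below illustrates one plausible reading of it: estimate each layer's activation entropy from a histogram, then greedily start a new quantization block wherever the entropy gap between adjacent layers is large (suggesting weak cross-layer dependency). The function names, the histogram estimator, and the gap-threshold partitioning rule are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np


def activation_entropy(acts, num_bins=256):
    """Histogram estimate of the Shannon entropy (in bits) of a layer's
    activation distribution. `acts` is a flat array of activation values."""
    hist, _ = np.histogram(acts, bins=num_bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is well-defined
    return float(-(p * np.log2(p)).sum())


def partition_blocks(entropies, threshold):
    """Greedy block partitioning (hypothetical): a new block begins whenever
    the entropy gap between adjacent layers exceeds `threshold`, on the
    assumption that a large gap signals weak cross-layer dependency."""
    blocks, current = [], [0]
    for i in range(1, len(entropies)):
        if abs(entropies[i] - entropies[i - 1]) > threshold:
            blocks.append(current)
            current = [i]
        else:
            current.append(i)
    blocks.append(current)
    return blocks


# Example: four layers whose entropies jump between layers 1 and 2,
# so the partition splits them into two blocks searched jointly.
layer_entropies = [1.0, 1.1, 3.0, 3.2]
print(partition_blocks(layer_entropies, threshold=1.0))  # → [[0, 1], [2, 3]]
```

Grouping layers into blocks this way lets the rounding search optimize jointly within each block (capturing strong dependencies) while keeping the overall search cost far below a full joint search over all layers.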