AI Summary
Large Vision-Language Models (LVLMs) suffer from prohibitive computational overhead when processing high-resolution images, as FLOPs scale quadratically with resolution. This work identifies an empirical regularity: visual token redundancy increases with network depth. Building on this, we propose a stage-wise progressive visual token pruning method that preserves full-resolution visual representations in the shallow layers while dynamically discarding redundant tokens in the deeper layers via a lightweight similarity metric, yielding a pyramid-structured visual representation across model depth. The approach requires no fine-tuning and enables plug-and-play, zero-training inference acceleration. Evaluated on LLaVA-NeXT, our method reduces training time by 40% and inference FLOPs by 55% with negligible performance degradation, and it outperforms existing token compression techniques in both efficiency and accuracy, offering a practical, architecture-agnostic solution for scalable LVLM inference.
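To make the quadratic-cost claim concrete, the sketch below is a back-of-the-envelope calculation with illustrative assumptions (a 14-pixel patch size and 336/672-pixel resolutions, which are not figures from this work): token count grows with image area, and self-attention cost grows with the square of the token count.

```python
# Illustrative sketch: higher resolution means more visual tokens,
# and self-attention cost scales with the square of the sequence length.
def num_visual_tokens(side_px: int, patch_px: int = 14) -> int:
    """Tokens produced by a square image split into patch_px x patch_px patches."""
    return (side_px // patch_px) ** 2

base = num_visual_tokens(336)
for side in (336, 672):
    n = num_visual_tokens(side)
    print(f"{side}x{side} image -> {n} visual tokens, "
          f"~{(n / base) ** 2:.0f}x the attention cost of the 336px baseline")
```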
Abstract
In large vision-language models (LVLMs), images serve as inputs that carry a wealth of information. As the idiom "A picture is worth a thousand words" implies, representing a single image in current LVLMs can require hundreds or even thousands of tokens. This results in significant computational costs, which grow quadratically as the input image resolution increases, severely impacting the efficiency of both training and inference. Previous approaches have attempted to reduce the number of image tokens either before or within the early layers of LVLMs. However, these strategies inevitably lose crucial image information, ultimately diminishing model performance. To address this challenge, we conduct an empirical study revealing that all visual tokens are necessary in the shallow layers of LVLMs, while token redundancy progressively increases in the deeper layers. To this end, we propose PyramidDrop, a visual redundancy reduction strategy for LVLMs that boosts efficiency in both training and inference with negligible performance loss. Specifically, we partition the LVLM into several stages and drop a portion of the image tokens at the end of each stage with a pre-defined ratio, creating pyramid-like visual token counts across model layers. The dropping is based on a lightweight similarity calculation with negligible time overhead. Extensive experiments demonstrate that PyramidDrop achieves a 40% reduction in training time and a 55% reduction in inference FLOPs for LLaVA-NeXT with comparable performance. Moreover, PyramidDrop can also serve as a plug-and-play strategy for inference acceleration without training, with better performance and lower inference cost than its counterparts. Code is available at https://github.com/Cooperx521/PyramidDrop.
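The per-stage drop can be sketched roughly as follows. This is a minimal, hypothetical illustration rather than the released implementation: it assumes the lightweight similarity is a scaled dot product between the last instruction token and each image token, with `q_proj` / `k_proj` as placeholder projections standing in for the stage's attention maps, and it keeps only the top fraction of image tokens for the next stage.

```python
import torch


def select_image_tokens(hidden_states, image_positions, query_position,
                        keep_ratio, q_proj, k_proj):
    """Rank image tokens by similarity to a reference text token and keep the top fraction.

    hidden_states:   (B, L, D) hidden states at the end of one LVLM stage
    image_positions: (N_img,)  sequence positions of the image tokens
    query_position:  position of the reference (e.g. last instruction) token
    keep_ratio:      fraction of image tokens retained for the next stage
    q_proj, k_proj:  placeholder linear maps standing in for the stage's attention projections
    """
    q = q_proj(hidden_states[:, query_position])        # (B, D)
    k = k_proj(hidden_states[:, image_positions])       # (B, N_img, D)

    # Lightweight similarity: scaled dot product between the reference token and each image token.
    sim = torch.einsum("bd,bnd->bn", q, k) / (k.shape[-1] ** 0.5)

    num_keep = max(1, int(keep_ratio * image_positions.numel()))
    top = sim.topk(num_keep, dim=-1).indices             # (B, num_keep) indices into image_positions
    return image_positions[top]                           # (B, num_keep) sequence positions to keep


def drop_stage_tokens(hidden_states, image_positions, text_positions,
                      keep_ratio, q_proj, k_proj):
    """Build the pruned sequence for the next stage (batch size 1 for simplicity)."""
    keep = select_image_tokens(hidden_states, image_positions,
                               text_positions[-1], keep_ratio, q_proj, k_proj)[0]
    # All text tokens survive; kept image tokens stay in their original order.
    kept_positions = torch.cat([keep, text_positions]).sort().values
    return hidden_states[:, kept_positions], keep
```

Applied at the end of each of several stages with a pre-defined per-stage ratio, this kind of drop leaves progressively fewer visual tokens in deeper layers, forming the pyramid described above.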