🤖 AI Summary
This work addresses the substantial computational overhead incurred by high-resolution document and GUI images in vision-language models, which yield large numbers of redundant visual tokens. The authors propose a training-free, parameter-free preprocessing method that operates in pixel space prior to ViT encoding, leveraging predictive coding to adaptively identify and prune duplicate image patches, thereby achieving lossless or controllably lossy visual token compression. Notably, this approach is the first to exploit patch-level redundancy at the pixel level for inference acceleration, enabling end-to-end speedup of both the ViT encoder and the downstream large language model. Experiments across diverse model scales and benchmarks demonstrate up to 4.2× faster inference and 1.9× faster training while maintaining competitive task accuracy.
📝 Abstract
Document understanding and GUI interaction are among the highest-value applications of Vision-Language Models (VLMs), yet they impose an exceptionally heavy computational burden: fine-grained text and small UI elements demand high-resolution inputs that produce tens of thousands of visual tokens. We observe that this cost is largely wasteful -- across document and GUI benchmarks, only 22--71\% of image patches are pixel-unique, the rest being exact duplicates of another patch in the same image. We propose \textbf{PixelPrune}, which exploits this pixel-level redundancy through predictive-coding-based compression, pruning redundant patches \emph{before} the Vision Transformer (ViT) encoder. Because it operates in pixel space prior to any neural computation, PixelPrune accelerates both the ViT encoder and the downstream LLM, covering the full inference pipeline. The method is training-free, requires no learnable parameters, and supports pixel-lossless compression ($\tau{=}0$) as well as controlled lossy compression ($\tau{>}0$). Experiments across three model scales and document and GUI benchmarks show that PixelPrune maintains competitive task accuracy while delivering up to 4.2$\times$ inference speedup and 1.9$\times$ training acceleration. Code is available at https://github.com/OPPO-Mente-Lab/PixelPrune.
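The core idea of pruning pixel-duplicate patches before any neural computation can be illustrated with a minimal sketch. This is not the authors' predictive-coding implementation; it is a simplified exact/tolerance-based duplicate check, and the function name, patch size, and `tau` parameter semantics (maximum absolute pixel difference) are assumptions for illustration. With `tau=0` only exact duplicates are dropped (pixel-lossless); `tau>0` also merges near-duplicates (controlled lossy).

```python
import numpy as np

def prune_duplicate_patches(image: np.ndarray, patch: int = 4, tau: float = 0.0):
    """Split an image into non-overlapping patch x patch tiles and drop any
    tile whose pixels match an earlier-kept tile within tolerance tau.
    Returns the kept tiles and their (row, col) grid indices, so positions
    could later be restored or passed as position IDs to an encoder."""
    H, W = image.shape[:2]
    kept, indices = [], []
    for r in range(0, H - patch + 1, patch):
        for c in range(0, W - patch + 1, patch):
            tile = image[r:r + patch, c:c + patch]
            # Cast to a signed type so uint8 subtraction cannot wrap around.
            is_dup = any(
                np.max(np.abs(tile.astype(np.int16) - q.astype(np.int16))) <= tau
                for q in kept
            )
            if not is_dup:
                kept.append(tile)
                indices.append((r // patch, c // patch))
    return kept, indices

# Toy example: an 8x8 image with three identical blank tiles and one bright tile.
img = np.zeros((8, 8), dtype=np.uint8)
img[0:4, 4:8] = 255
tiles, idx = prune_duplicate_patches(img, patch=4, tau=0)
# tau=0 keeps one blank tile and the bright tile: 2 of 4 tiles survive.
```

The linear scan over kept tiles is quadratic in the worst case; a practical version would hash patch bytes (for `tau=0`) or use the paper's predictive-coding scheme to find redundancy efficiently.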