🤖 AI Summary
To address the excessive visual token count in vision-language models (VLMs)—which leads to long context lengths, high computational overhead, and significant inference latency—this paper proposes a parameter-free, low-overhead frequency-domain compression method. Specifically, it introduces the two-dimensional discrete cosine transform (2D-DCT) into visual token compression for the first time, leveraging the concentration of visual-feature energy in low-frequency components; low-pass filtering retains only the essential low-frequency coefficients, drastically shortening the visual sequence. Unlike existing approaches that rely on learnable queries or importance sampling, the method requires no additional parameters or training and incurs negligible computational cost. Experiments on LLaVA and Qwen-VL demonstrate competitive performance against state-of-the-art methods, with an 83.8% reduction in inference FLOPs and a 31.2% increase in generation speed—achieving a strong balance among efficiency, accuracy, and generalization.
📝 Abstract
Vision-Language Models (VLMs) typically replace the predefined image placeholder token (<image>) in textual instructions with visual features from an image encoder, forming the input to a backbone Large Language Model (LLM). However, the large number of vision tokens significantly increases the context length, leading to high computational overhead and inference latency. While previous efforts mitigate this by selecting only important visual features or leveraging learnable queries to reduce the token count, they often compromise performance or introduce substantial extra costs. In response, we propose Fourier-VLM, a simple yet efficient method that compresses visual representations in the frequency domain. Our approach is motivated by the observation that vision features output from the vision encoder exhibit concentrated energy in low-frequency components. Leveraging this, we apply a low-pass filter to the vision features using a two-dimensional Discrete Cosine Transform (DCT). Notably, the DCT is efficiently computed via the Fast Fourier Transform (FFT) operator with a time complexity of $\mathcal{O}(n\log n)$, minimizing the extra computational cost while introducing no additional parameters. Extensive experiments across various image-based benchmarks demonstrate that Fourier-VLM achieves competitive performance with strong generalizability across both LLaVA and Qwen-VL architectures. Crucially, it reduces inference FLOPs by up to 83.8% and boosts generation speed by 31.2% compared to LLaVA-v1.5, highlighting its superior efficiency and practicality.
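The core idea described above—a 2D-DCT over the spatial grid of vision features followed by a low-pass filter that keeps only the top-left block of low-frequency coefficients—can be sketched as follows. This is an illustrative reconstruction, not the paper's released code; the function name, the `(H, W, C)` feature layout, and the square `keep × keep` retention window are assumptions for clarity.

```python
import numpy as np
from scipy.fft import dctn  # DCT computed via FFT, O(n log n)

def dct_lowpass_compress(features: np.ndarray, keep: int) -> np.ndarray:
    """Compress a spatial grid of vision features in the frequency domain.

    features: (H, W, C) array — H*W vision tokens of dimension C,
              reshaped into their 2D spatial layout (assumed layout).
    keep:     side length of the retained low-frequency block (keep <= H, W).

    Returns a (keep, keep, C) array of low-frequency DCT coefficients,
    i.e. keep*keep tokens instead of H*W — no learned parameters involved.
    """
    # 2D DCT over the spatial axes only; for natural-image features the
    # energy concentrates in the low-frequency (top-left) coefficients.
    coeffs = dctn(features, axes=(0, 1), norm="ortho")
    # Low-pass filter: discard high-frequency coefficients.
    return coeffs[:keep, :keep, :]

# Example: a 24x24 token grid (576 tokens, as in LLaVA-v1.5) reduced
# to a 12x12 grid (144 tokens) — a 4x reduction in sequence length.
grid = np.random.default_rng(0).normal(size=(24, 24, 1024)).astype(np.float32)
compressed = dct_lowpass_compress(grid, keep=12)
print(compressed.shape)  # (12, 12, 1024)
```

Because the orthonormal DCT is energy-preserving, truncating high-frequency coefficients discards only the small fraction of feature energy they carry, which is what lets the shortened sequence retain most of the visual information.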