🤖 AI Summary
Current vision-language models (VLMs) suffer significant performance degradation on low-bitrate compressed images, yet the underlying causes and effective mitigation strategies remain poorly understood. To address this, we introduce the first comprehensive benchmark for evaluating VLMs on compressed images—comprising over one million samples spanning multiple codecs (JPEG, WebP, AVIF), diverse bitrates, and heterogeneous downstream tasks. Through systematic analysis, we identify generalization failure—not inherent information loss—as the primary driver of performance decline. Building on this insight, we propose a lightweight, plug-and-play, fine-tuning-free feature alignment adapter that ensures cross-codec and cross-bitrate compatibility. Extensive experiments demonstrate that a single adapter boosts zero-shot VLM comprehension by 10–30% across multiple tasks, substantially narrowing the performance gap between compressed and original images.
📝 Abstract
With the rapid development of Vision-Language Models (VLMs) and the growing demand for their applications, efficient compression of image inputs has become increasingly important. Existing VLMs predominantly digest and understand high-bitrate compressed images, while their ability to interpret low-bitrate compressed images remains largely unexplored. In this paper, we introduce the first comprehensive benchmark for evaluating VLMs on compressed images, covering widely used image codecs and a diverse set of tasks, and comprising over one million compressed images. Next, we analyze the source of the performance gap by decomposing it into (a) information loss during compression and (b) generalization failure of the VLM. We visualize these gaps with concrete examples and show that, for compressed images, only the generalization gap can be mitigated. Finally, we propose a universal VLM adapter to enhance model performance on images compressed by existing codecs. We demonstrate that a single adapter can improve VLM performance across images with varying codecs and bitrates by 10%–30%. We believe that our benchmark and enhancement method provide valuable insights and contribute toward bridging the gap between VLMs and compressed images.
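The benchmark construction described above (one source image, multiple codecs, multiple bitrates) could be sketched roughly as follows. This is a minimal illustration, not the authors' code: it assumes Pillow, whose `Image.save` accepts a `quality` parameter for JPEG and WebP; AVIF would additionally require the optional `pillow-avif-plugin` package, so it is omitted here.

```python
# Generate compressed variants of an image at several quality levels,
# recording the resulting bits-per-pixel for each (codec, quality) pair.
# Assumes Pillow (PIL); AVIF needs pillow-avif-plugin and is left out.
import io
from PIL import Image

def compress_variants(img, codecs=("JPEG", "WEBP"), qualities=(10, 30, 50, 80)):
    """Return {(codec, quality): (encoded_bytes, bits_per_pixel)}."""
    w, h = img.size
    out = {}
    for codec in codecs:
        for q in qualities:
            buf = io.BytesIO()
            img.save(buf, format=codec, quality=q)  # lossy encode in memory
            n_bytes = buf.tell()
            out[(codec, q)] = (buf.getvalue(), 8 * n_bytes / (w * h))
    return out
```

Each decoded variant would then be fed to the VLM alongside the task prompt, and scores compared against the uncompressed original to measure degradation per codec and per bitrate.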