Benchmarking and Enhancing VLM for Compressed Image Understanding

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) suffer significant performance degradation on low-bitrate compressed images, yet the underlying causes and effective mitigation strategies remain poorly understood. To address this, we introduce the first comprehensive benchmark for evaluating VLMs on compressed images—comprising over one million samples spanning multiple codecs (JPEG, WebP, AVIF), diverse bitrates, and heterogeneous downstream tasks. Through systematic analysis, we identify generalization failure—not inherent information loss—as the primary driver of performance decline. Building on this insight, we propose a lightweight, plug-and-play, fine-tuning-free feature alignment adapter that ensures cross-codec and cross-bitrate compatibility. Extensive experiments demonstrate that a single adapter boosts zero-shot VLM comprehension by 10–30% across multiple tasks, substantially narrowing the performance gap between compressed and original images.
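The summary does not specify the adapter's architecture or training procedure. As a rough illustration of the general idea of fine-tuning-free feature alignment, one could fit a linear map from compressed-image features to original-image features and apply it at inference time without touching the VLM's weights. The shapes, the linear form, and the least-squares fit below are all assumptions for the sketch, not the paper's method:

```python
import numpy as np

def fit_linear_alignment(feats_compressed, feats_original):
    """Fit W minimizing ||feats_compressed @ W - feats_original||_F
    via least squares (a stand-in for a learned alignment adapter)."""
    W, *_ = np.linalg.lstsq(feats_compressed, feats_original, rcond=None)
    return W

def apply_adapter(feats_compressed, W):
    """Map compressed-image features toward the original-image feature space."""
    return feats_compressed @ W

# Toy demo: synthetic "clean" features, plus a linear distortion that
# simulates the feature shift caused by low-bitrate compression.
rng = np.random.default_rng(0)
f_orig = rng.normal(size=(1000, 64))                      # clean vision features
distortion = np.eye(64) * 0.7 + rng.normal(scale=0.01, size=(64, 64))
f_comp = f_orig @ distortion                              # compressed-image features

W = fit_linear_alignment(f_comp, f_orig)
aligned = apply_adapter(f_comp, W)
# Alignment shrinks the feature gap to the clean representation.
print(np.linalg.norm(aligned - f_orig) < np.linalg.norm(f_comp - f_orig))  # True
```

Because the adapter sits between the frozen vision encoder and the language model, a single fitted map can in principle be reused across codecs and bitrates, which matches the plug-and-play, cross-codec claim above.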

📝 Abstract
With the rapid development of Vision-Language Models (VLMs) and the growing demand for their applications, efficient compression of image inputs has become increasingly important. Existing VLMs predominantly digest and understand high-bitrate compressed images, while their ability to interpret low-bitrate compressed images remains largely unexplored. In this paper, we introduce the first comprehensive benchmark to evaluate VLMs on compressed images, covering widely used image codecs and a diverse set of tasks, and encompassing over one million compressed images. Next, we analyze the source of the performance gap, categorizing it into a) information loss during compression and b) generalization failure of the VLM. We visualize these gaps with concrete examples and identify that, for compressed images, only the generalization gap can be mitigated. Finally, we propose a universal VLM adapter to enhance model performance on images compressed by existing codecs. We demonstrate that a single adapter can improve VLM performance across images with varying codecs and bitrates by 10%-30%. We believe that our benchmark and enhancement method provide valuable insights and contribute toward bridging the gap between VLMs and compressed images.
Problem

Research questions and friction points this paper is trying to address.

Evaluates VLM performance on compressed images
Analyzes performance gaps from compression and generalization
Proposes adapter to enhance VLM for compressed images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive benchmark for VLM compressed image evaluation
Universal adapter enhances VLM performance across codecs
Mitigates generalization gap in low-bitrate compressed images
Zifu Zhang — Institute for AI Industry Research, Tsinghua University
Tongda Xu — PhD candidate, Tsinghua University; image & video compression, perceptual quality
Siqi Li — Beijing University of Technology
Shengxi Li — Beihang University
Yue Zhang — Beihang University
Mai Xu — Beihang University, Tsinghua University, Imperial College London
Yan Wang — Institute for AI Industry Research, Tsinghua University