When MLLMs Meet Compression Distortion: A Coding Paradigm Tailored to MLLMs

📅 2025-09-29
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing image codecs are optimized for the human visual system (HVS) and are thus poorly suited to the diverse, semantics-driven requirements of multimodal large language models (MLLMs). To address this, we propose CoTAM, the first semantic-aware image coding framework designed specifically for MLLMs. We empirically show that compression distortion affects multi-level image features non-uniformly; leveraging this insight, CoTAM introduces an encoding paradigm in which importance maps generated from CLIP's shallow-layer attention guide adaptive bit allocation. A lightweight adapter decoder and a multi-level loss function jointly preserve both high-level semantics and fine-grained details. Extensive experiments show that CoTAM achieves up to 35.99% bitrate savings over state-of-the-art neural codecs while maintaining the same performance on mainstream MLLM downstream tasks.
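
As a rough illustration of the attention-based importance map described above, the sketch below extracts CLS-to-patch attention from a shallow layer of a pretrained CLIP vision encoder and upsamples it to image resolution. The model choice (openai/clip-vit-base-patch32), the layer index, and the min-max normalization are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch: patch-level importance from shallow-layer CLIP attention.
# Layer index and normalization are assumptions, not CoTAM's published design.
import torch
import torch.nn.functional as F
from transformers import CLIPVisionModel, CLIPImageProcessor

model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def importance_map(image, shallow_layer: int = 2, out_size=(224, 224)):
    """Return an (H, W) map in [0, 1] from shallow-layer CLS->patch attention."""
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    out = model(pixel_values, output_attentions=True)
    # attentions: tuple of (batch, heads, 1+N, 1+N); pick a shallow layer.
    attn = out.attentions[shallow_layer].mean(dim=1)  # average over heads
    cls_to_patch = attn[:, 0, 1:]                     # CLS row, drop CLS->CLS
    side = int(cls_to_patch.shape[-1] ** 0.5)         # 7x7 grid for ViT-B/32 @ 224
    grid = cls_to_patch.reshape(1, 1, side, side)
    grid = F.interpolate(grid, size=out_size, mode="bilinear", align_corners=False)
    grid = (grid - grid.min()) / (grid.max() - grid.min() + 1e-8)
    return grid[0, 0]
```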

📝 Abstract
The increasing deployment of powerful Multimodal Large Language Models (MLLMs), typically hosted on cloud platforms, urgently requires effective compression techniques to transmit signal inputs (e.g., images, videos) from edge devices with minimal bandwidth usage. However, conventional image codecs are optimized for fidelity under the Human Visual System (HVS) and are ill-suited to MLLMs, where diverse downstream tasks must be jointly considered. In this paper, we first systematically analyze the impact of compression artifacts on several mainstream MLLMs. We find that compression distortion impacts different levels of image features unevenly, so its effect on an MLLM's downstream tasks depends on which feature levels those tasks rely on. Motivated by this finding, we propose an image Codec TAilored to MLLMs (CoTAM), designed to adaptively protect multi-level features and suit the different demands of downstream tasks. The encoder leverages CLIP's shallow-layer attention to generate an importance map for bit allocation, preserving critical semantic regions. Concurrently, the decoder integrates a lightweight adapter with a multi-level loss function to ensure faithful reconstruction of both low-level details and high-level semantic context, enabling robust synthesis of cross-level features. Extensive experiments validate that our method achieves up to 35.99% bitrate savings while maintaining the same performance on MLLM tasks, outperforming previous SOTA neural codecs.
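
To make the bit-allocation step concrete, here is a minimal sketch of how an importance map could steer bits toward semantic regions in a neural codec, assuming a latent tensor y and a spatially varying quantization step. The 1/(1 + alpha * importance) scaling and the alpha parameter are hypothetical, since the exact mechanism is not specified here.

```python
# Hypothetical importance-guided bit allocation for a learned latent y of
# shape (B, C, h, w). Finer quantization (more bits) where importance is high.
import torch
import torch.nn.functional as F

def allocate_bits(y: torch.Tensor, importance: torch.Tensor, alpha: float = 1.5):
    """Quantize latent y with a spatially varying step driven by importance."""
    # Resize the (H, W) importance map to the latent resolution (h, w).
    imp = F.interpolate(importance[None, None], size=y.shape[-2:],
                        mode="bilinear", align_corners=False)
    # Smaller quantization step in important regions -> more bits spent there.
    qstep = 1.0 / (1.0 + alpha * imp)
    # Hard rounding for inference; training would use additive noise or a
    # straight-through estimator instead.
    y_hat = torch.round(y / qstep) * qstep
    return y_hat, qstep
```

Shrinking the quantization step in high-importance regions spends more bits there, mirroring the abstract's goal of preserving critical semantic regions at a fixed overall rate.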
Problem

Research questions and friction points this paper is trying to address.

Optimizing image compression for MLLMs instead of human visual perception
Addressing uneven compression distortion effects on multimodal AI tasks
Developing adaptive feature protection for diverse downstream applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Image codec tailored to MLLMs with adaptive feature protection
Encoder uses CLIP attention for importance-based bit allocation
Decoder integrates lightweight adapter with multi-level loss function (see the loss sketch after this list)
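
Below is a minimal sketch of what such a multi-level loss could look like, combining a pixel term with shallow (detail) and deep (semantic) CLIP feature terms. The layer indices and the weights w_pix/w_low/w_sem are illustrative assumptions, not the paper's actual objective.

```python
# Hypothetical multi-level reconstruction loss: pixel MSE + shallow CLIP
# feature distance (low-level details) + deep CLS cosine term (semantics).
import torch
import torch.nn.functional as F
from transformers import CLIPVisionModel

clip = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32").eval()
for p in clip.parameters():
    p.requires_grad_(False)  # frozen feature extractor; gradients still flow to x_hat

def multi_level_loss(x_hat, x, w_pix=1.0, w_low=0.1, w_sem=0.1):
    """x_hat, x: (B, 3, 224, 224) CLIP-normalized pixel tensors."""
    pix = F.mse_loss(x_hat, x)
    h_hat = clip(x_hat, output_hidden_states=True).hidden_states
    h = clip(x, output_hidden_states=True).hidden_states
    low = F.mse_loss(h_hat[2], h[2])  # shallow layer: fine-grained details
    # Deep CLS embeddings: penalize semantic drift via cosine distance.
    sem = 1 - F.cosine_similarity(h_hat[-1][:, 0], h[-1][:, 0]).mean()
    return w_pix * pix + w_low * low + w_sem * sem
```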
Authors

Jinming Liu (Shanghai Jiao Tong University) – VLM, LLM, Computer Vision, Image/Video Compression
Zhaoyang Jia (University of Science and Technology of China) – Video compression, digital watermarking
Jiahao Li (Microsoft Research Asia)
Bin Li (Microsoft Research Asia)
Xin Jin (Eastern Institute of Technology, Ningbo, China)
Wenjun Zeng (Eastern Institute of Technology, Ningbo, China)
Yan Lu (Microsoft Research Asia)