🤖 AI Summary
Existing image codecs are optimized for the human visual system (HVS) and thus poorly suited to the diverse, semantics-driven requirements of multimodal large language models (MLLMs). To address this, we propose CoTAM, the first semantic-aware image coding framework specifically designed for MLLMs. We empirically reveal that compression distortion affects multi-level image features non-uniformly; leveraging this insight, CoTAM introduces a novel encoding paradigm wherein importance maps, generated from CLIP's shallow-layer attention, guide adaptive bit allocation. Additionally, a lightweight adapter decoder and a multi-level loss function jointly preserve both high-level semantics and fine-grained details. Extensive experiments demonstrate that CoTAM achieves up to 35.99% bitrate reduction over state-of-the-art neural codecs while incurring no loss of performance on mainstream MLLM downstream tasks.
📝 Abstract
The increasing deployment of powerful Multimodal Large Language Models (MLLMs), typically hosted on cloud platforms, urgently requires effective compression techniques to transmit signal inputs (e.g., images, videos) from edge devices with minimal bandwidth usage. However, conventional image codecs are optimized for fidelity to serve the Human Visual System (HVS) and are ill-suited for MLLMs, which must serve diverse downstream tasks jointly. In this paper, we first systematically analyze the impact of compression artifacts on several mainstream MLLMs. We find that compression distortion unevenly impacts different-level image features, so its effect on an MLLM's downstream tasks varies with the feature levels those tasks rely on. Motivated by this discovery, we propose an image Codec TAilored to MLLMs (CoTAM), designed to adaptively protect multi-level features and suit the different demands of downstream tasks. The encoder leverages CLIP's shallow-layer attention to generate an importance map for bit allocation, preserving critical semantic regions. Concurrently, the decoder integrates a lightweight adapter with a multi-level loss function to ensure faithful reconstruction of both low-level details and high-level semantic context, enabling robust synthesis of cross-level features. Extensive experiments validate that our method achieves up to 35.99% bitrate savings while maintaining the same performance on MLLM tasks, outperforming previous SOTA neural codecs.
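The attention-guided bit allocation described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the function names, the CLS-to-patch attention averaging, and the proportional allocation rule with an importance floor are all assumptions chosen for illustration, standing in for however CoTAM derives its importance map from CLIP's shallow-layer attention.

```python
import numpy as np

def importance_map_from_attention(attn, grid):
    """Turn shallow-layer ViT attention into a patch-level importance map.

    attn: (heads, 1 + H*W, 1 + H*W) row-stochastic attention weights,
          where token 0 is the CLS token (assumed layout, for illustration).
    grid: (H, W) patch grid of the vision encoder.
    Returns an (H, W) map normalized to [0, 1].
    """
    h, w = grid
    cls_to_patch = attn[:, 0, 1:]              # CLS -> patch attention, per head
    imp = cls_to_patch.mean(axis=0).reshape(h, w)
    imp = imp - imp.min()
    return imp / (imp.max() + 1e-8)

def allocate_bits(imp, total_bits, floor=0.1):
    """Split a total bit budget across patches in proportion to importance,
    with a small floor so no region is completely starved of bits."""
    weights = floor + (1.0 - floor) * imp
    return total_bits * weights / weights.sum()

# Toy example with random (but properly normalized) attention weights.
rng = np.random.default_rng(0)
heads, hp, wp = 8, 14, 14
tokens = 1 + hp * wp
attn = rng.random((heads, tokens, tokens))
attn /= attn.sum(axis=-1, keepdims=True)       # make rows sum to 1, like softmax

imp = importance_map_from_attention(attn, (hp, wp))
bits = allocate_bits(imp, total_bits=100_000)
```

In a real codec the per-patch budgets would modulate the entropy model or quantization step of a learned compressor rather than being assigned directly; the sketch only shows how an attention-derived importance map can drive non-uniform rate allocation.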