Unified Coding for Both Human Perception and Generalized Machine Analytics with CLIP Supervision

📅 2025-01-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image compression methods struggle to jointly optimize for human perceptual quality and downstream machine vision performance, and typically rely on task-specific supervision. This paper proposes UG-ICM, a unified image coding framework that enables a single bitstream to serve both human perception and general-purpose machine analysis. Its core contributions are: (1) the first CLIP-driven global–instance multi-granularity semantic supervision, enabling fully unsupervised, label-free training; (2) a conditional variational decoding mechanism that dynamically generates distinct reconstructions tailored to human or machine preferences; and (3) hierarchical semantic modeling integrated with adaptive multi-objective optimization. Experiments demonstrate that UG-ICM significantly improves performance on unseen machine vision tasks—including classification, detection, and segmentation—while achieving state-of-the-art subjective quality. It is the first method to realize truly unified human–machine-optimal image coding.
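Contribution (2) above, a decoder that turns one bitstream into human- or machine-preferred reconstructions, can be sketched as a decoder modulated by a learned preference embedding. This is a minimal illustration only: the class, layer sizes, and FiLM-style modulation below are assumptions for exposition, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ConditionalDecoder(nn.Module):
    """Preference-conditioned decoder (illustrative sketch).

    One shared latent (the decoded bitstream) yields different
    reconstructions depending on a preference flag:
    0 = human perception, 1 = machine analysis.
    """

    def __init__(self, latent_ch=8, hidden=16):
        super().__init__()
        # One learned embedding row per preference.
        self.pref_embed = nn.Embedding(2, hidden)
        self.head = nn.Sequential(
            nn.Conv2d(latent_ch, hidden, 3, padding=1),
            nn.ReLU(),
        )
        self.out = nn.Conv2d(hidden, 3, 3, padding=1)

    def forward(self, latent, preference):
        h = self.head(latent)
        # Channel-wise modulation by the preference embedding,
        # so the same latent decodes to two different images.
        scale = self.pref_embed(preference).view(-1, h.shape[1], 1, 1)
        return self.out(h * scale)
```

Because the preference enters only as a conditioning signal, the encoder and bitstream stay shared; only the decoding path branches by preference.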

📝 Abstract
Image compression models have long struggled with adaptability and generalization, as the decoded bitstream typically serves only human or machine needs and fails to preserve information for unseen visual tasks. This paper therefore introduces supervision obtained from multimodal pre-training models and incorporates adaptive multi-objective optimization so that a single bitstream supports both human visual perception and machine vision simultaneously, denoted Unified and Generalized Image Coding for Machine (UG-ICM). Specifically, to free the compression model from reliance on downstream task supervision, we introduce Contrastive Language-Image Pre-training (CLIP) models into the training constraint for improved generalization. Global-to-instance-wise CLIP supervision helps obtain hierarchical semantics that make the model more generalizable to tasks relying on information of different granularity. Furthermore, to support both human and machine vision with a single unified bitstream, we incorporate a conditional decoding strategy that takes human or machine preferences as conditions, enabling the bitstream to be decoded into different versions for the corresponding preference. As such, the proposed UG-ICM is trained in a fully self-supervised manner, i.e., without awareness of any specific downstream models or tasks. Extensive experiments show that UG-ICM achieves remarkable improvements on various unseen machine analytics tasks while simultaneously providing perceptually satisfying images.
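The global-to-instance-wise CLIP supervision described in the abstract can be sketched as a training loss that aligns CLIP embeddings of the original and reconstructed image, both globally and on instance crops. Everything here is a hedged sketch: `clip_encode` stands in for a frozen CLIP image encoder, and `instance_boxes` for externally obtained object regions; neither name comes from the paper.

```python
import torch
import torch.nn.functional as F

def clip_supervision_loss(clip_encode, original, reconstructed, instance_boxes):
    """Global-to-instance semantic alignment loss (sketch).

    clip_encode: maps an image batch (N, C, H, W) to embeddings (N, D);
        in the paper this would be a frozen CLIP image encoder.
    instance_boxes: list of (x0, y0, x1, y1) crops for instance-level terms.
    """
    # Global term: full-image embeddings should match (1 - cosine similarity).
    g_orig = F.normalize(clip_encode(original), dim=-1)
    g_rec = F.normalize(clip_encode(reconstructed), dim=-1)
    loss = 1.0 - (g_orig * g_rec).sum(dim=-1).mean()

    # Instance terms: the same alignment on object crops,
    # supervising semantics at a finer granularity.
    for (x0, y0, x1, y1) in instance_boxes:
        c_orig = F.normalize(clip_encode(original[:, :, y0:y1, x0:x1]), dim=-1)
        c_rec = F.normalize(clip_encode(reconstructed[:, :, y0:y1, x0:x1]), dim=-1)
        loss = loss + 1.0 - (c_orig * c_rec).sum(dim=-1).mean()
    return loss
```

Because the loss compares embeddings rather than task predictions, it needs no labels or downstream models, which is what makes the training self-supervised and task-agnostic.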
Problem

Research questions and friction points this paper is trying to address.

Image Compression
Adaptability to New Tasks
Universal Encoding for Human and Machine Perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

UG-ICM
Multimodal Pre-training
Adaptive Image Compression
Kangsheng Yin
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University
Quan Liu
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University
Xuelin Shen
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
Yulin He
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
Wenhan Yang
Ph.D. student in Computer Science, University of California, Los Angeles
Self-supervised Learning · Model Robustness
Shiqi Wang
City University of Hong Kong