🤖 AI Summary
Existing image compression methods struggle to jointly optimize for human perceptual quality and downstream machine vision performance, and typically rely on task-specific supervision. This paper proposes UG-ICM, a unified image coding framework in which a single bitstream serves both human perception and general-purpose machine analysis. Its core contributions are: (1) CLIP-driven global-to-instance, multi-granularity semantic supervision, enabling fully unsupervised, label-free training; (2) a conditional variational decoding mechanism that generates distinct reconstructions tailored to human or machine preferences; and (3) hierarchical semantic modeling integrated with adaptive multi-objective optimization. Experiments demonstrate that UG-ICM significantly improves performance on unseen machine vision tasks, including classification, detection, and segmentation, while achieving state-of-the-art subjective quality, making it the first method to realize a truly unified, human- and machine-optimal image codec.
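The multi-granularity semantic supervision in contribution (1) amounts to comparing frozen CLIP embeddings of the original and reconstructed image at two scales: the whole image (global) and individual instance crops. Below is a minimal numpy sketch of that idea; the `embed` function stands in for a frozen CLIP image encoder, and the `boxes` list of instance regions is a hypothetical interface — the paper's actual encoder, region source, and term weighting may differ.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

def multi_granularity_clip_loss(embed, orig, recon, boxes, lam=0.5):
    """Global-to-instance semantic supervision (sketch).

    embed : stand-in for a frozen CLIP image encoder (image -> vector)
    orig, recon : original and reconstructed images, (H, W, C) arrays
    boxes : list of (y0, y1, x0, x1) instance crops -- hypothetical
            interface; a real system would obtain regions elsewhere
    lam : weight balancing global vs. instance-level terms (assumed)
    """
    # Global term: whole-image embeddings should stay close.
    loss = cosine_distance(embed(orig), embed(recon))
    # Instance term: each cropped region should keep its semantics.
    if boxes:
        inst = [cosine_distance(embed(orig[y0:y1, x0:x1]),
                                embed(recon[y0:y1, x0:x1]))
                for (y0, y1, x0, x1) in boxes]
        loss += lam * float(np.mean(inst))
    return loss
```

Because the loss is defined purely on embeddings of image content, no task labels or downstream models are needed, which is what makes the training label-free.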
📝 Abstract
Image compression models have long struggled with adaptability and generalization: the decoded bitstream typically serves only human or machine needs and fails to preserve information for unseen visual tasks. This paper therefore introduces supervision obtained from multimodal pre-trained models and incorporates adaptive multi-objective optimization to support both human visual perception and machine vision simultaneously with a single bitstream, a framework denoted Unified and Generalized Image Coding for Machine (UG-ICM). Specifically, to remove the reliance of compression models on downstream task supervision, we introduce Contrastive Language-Image Pre-training (CLIP) models into the training constraints for improved generalization. Global-to-instance-wise CLIP supervision helps the model capture hierarchical semantics, making it more generalizable to tasks that rely on information of different granularities. Furthermore, to support both human and machine vision with a single unifying bitstream, we incorporate a conditional decoding strategy that takes human or machine preferences as conditions, enabling the bitstream to be decoded into different versions for the corresponding preference. As such, the proposed UG-ICM is trained in a fully self-supervised manner, i.e., without awareness of any specific downstream models or tasks. Extensive experiments show that UG-ICM achieves remarkable improvements in various unseen machine analytics tasks while simultaneously providing perceptually satisfying images.
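The conditional decoding strategy can be illustrated with a toy sketch: a single shared latent (recovered from the one bitstream) is modulated by condition-specific parameters before reconstruction, so the same bits yield a human-oriented or machine-oriented output. The FiLM-style scale-and-shift conditioning below is an assumption chosen for clarity; the paper's actual conditioning layers are learned inside a variational decoder and may differ.

```python
import numpy as np

def conditional_decode(latent, preference, params):
    """Decode one shared latent into a preference-specific output (sketch).

    latent : (N, D) features recovered from the single bitstream
    preference : "human" or "machine" -- the decoding condition
    params : dict mapping each preference to hypothetical learned
             (gamma, beta) modulation vectors
    """
    gamma, beta = params[preference]       # condition-specific modulation
    modulated = latent * gamma + beta      # steer the shared features
    return np.tanh(modulated)              # toy reconstruction head
```

The key property is that only the condition changes between the two decodes; the encoder, bitstream, and shared latent are identical, which is what allows one bitstream to serve both preferences.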