Diff-ICMH: Harmonizing Machine and Human Vision in Image Compression with Generative Prior

📅 2025-11-27
🤖 AI Summary
Existing image compression methods typically optimize for either human perception or machine analysis in isolation, failing to satisfy both objectives jointly. This paper proposes Diff-ICMH, the first generative image compression framework explicitly designed to co-optimize for human visual quality and machine vision utility. The key contributions are threefold: (1) identifying shared principles between semantic fidelity and perceptual realism across vision tasks, which motivate a Semantic Consistency loss (SC loss) and a Tag Guidance Module (TGM) that leverages image-level tags to enhance semantic reconstruction; (2) a unified encoder-decoder architecture grounded in diffusion priors that supports multiple downstream AI tasks and high-fidelity subjective quality from a single bitstream and a single codec; (3) extensive experiments demonstrating state-of-the-art performance: superior generalization on classification and detection tasks alongside top-tier PSNR, MS-SSIM, and subjective quality scores.

📝 Abstract
Image compression methods are usually optimized in isolation for human perception or machine analysis tasks. We reveal fundamental commonalities between these objectives: preserving accurate semantic information is paramount, as it directly dictates the integrity of the information critical to intelligent tasks and aids human understanding. Concurrently, enhanced perceptual quality not only improves visual appeal but also, by ensuring realistic image distributions, benefits semantic feature extraction for machine tasks. Based on this insight, we propose Diff-ICMH, a generative image compression framework that harmonizes machine and human vision in image compression. It ensures perceptual realism by leveraging generative priors and simultaneously guarantees semantic fidelity through a Semantic Consistency loss (SC loss) incorporated during training. Additionally, we introduce the Tag Guidance Module (TGM), which leverages highly semantic image-level tags to stimulate the pre-trained diffusion model's generative capabilities while requiring minimal additional bit rate. Consequently, Diff-ICMH supports multiple intelligent tasks through a single codec and bitstream without any task-specific adaptation, while preserving a high-quality visual experience for human perception. Extensive experimental results demonstrate Diff-ICMH's superiority and generalizability across diverse tasks. Code is available at: https://github.com/RuoyuFeng/Diff-ICMH.
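The abstract's claim that image-level tags add negligible bit rate can be sanity-checked with a back-of-envelope sketch. The vocabulary and fixed-length coding below are illustrative assumptions, not the paper's actual tag codec:

```python
import math

# Hypothetical tag vocabulary; real tagging models use thousands of tags.
TAG_VOCAB = ["person", "dog", "car", "tree", "sky", "building", "water", "food"]

def tag_bit_cost(tags, vocab_size=len(TAG_VOCAB)):
    # Fixed-length coding: each tag index costs ceil(log2(V)) bits,
    # so even a handful of tags is dwarfed by the image bitstream.
    return len(tags) * math.ceil(math.log2(vocab_size))

# Three tags from an 8-word vocabulary: 3 * 3 = 9 bits total.
print(tag_bit_cost(["person", "dog", "sky"]))  # → 9
```

Even with a realistic vocabulary of, say, 4096 tags, each tag costs only 12 bits under this scheme, supporting the "minimal additional bit rate" claim.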
Problem

Research questions and friction points this paper is trying to address.

Harmonizing machine analysis and human visual perception in image compression
Optimizing compression for both semantic fidelity and perceptual quality simultaneously
Enabling single codec support for multiple intelligent tasks without adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages generative priors for perceptual realism
Uses Semantic Consistency loss for semantic fidelity
Introduces Tag Guidance Module for enhanced generative capabilities
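The SC loss is described here only at a high level. A minimal sketch of one plausible form, assuming it penalizes feature-space distance between the original and the reconstruction under a frozen semantic encoder (the `extract_features` stand-in below is hypothetical, not the paper's backbone):

```python
import math

def extract_features(pixels):
    # Hypothetical stand-in for a frozen semantic backbone
    # (e.g. a pretrained vision encoder): a fixed deterministic
    # projection of the flattened image to an 8-dim feature vector.
    return [
        sum(math.cos(k * i + 1) * p for i, p in enumerate(pixels))
        for k in range(8)
    ]

def semantic_consistency_loss(original, reconstruction):
    # One plausible SC-loss form: cosine distance between semantic
    # features of the original image and its reconstruction.
    f_o = extract_features(original)
    f_r = extract_features(reconstruction)
    dot = sum(a * b for a, b in zip(f_o, f_r))
    norm = math.sqrt(sum(a * a for a in f_o)) * math.sqrt(sum(b * b for b in f_r))
    return 1.0 - dot / (norm + 1e-12)

img = [0.1 * i for i in range(16)]
# A perfect reconstruction incurs a (near-)zero semantic penalty,
# while distortions that shift the feature direction are penalized.
```

The design point is that the gradient of such a loss pushes the decoder toward reconstructions that agree with the original in semantic feature space, rather than only in pixel space.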