🤖 AI Summary
Vision-language models such as CLIP prioritize semantic understanding while tightly entangling perceptual features with semantics, which hinders the fine-grained separation of perception and semantics that is critical for image quality assessment (IQA) and conditional image generation (CIG).
Method: We propose a language-guided visual perception disentanglement paradigm. First, we introduce I&2T, the first dual-text-annotated dataset for IQA/CIG, which provides disentangled perceptual and semantic descriptions for each image. Building on it, we design DeCLIP: a framework that combines multimodal contrastive learning, dual-branch text supervision, and CLIP feature-space remapping to explicitly disentangle perceptual and semantic representations.
Contribution/Results: DeCLIP preserves CLIP’s zero-shot transfer capability while significantly improving technical and aesthetic quality estimation accuracy in IQA and enabling fine-grained perceptual controllability in CIG. It outperforms state-of-the-art methods across multiple benchmarks. Code, models, and the I&2T dataset are fully open-sourced.
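The dual-branch text supervision above pairs each image with both a perceptual and a semantic caption and aligns the image with each via a contrastive objective. As an illustration only, not the authors' implementation, this can be sketched in plain Python over toy embedding vectors; the names `dual_branch_loss` and `alpha`, and the cosine-similarity InfoNCE form, are assumptions for exposition:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, pos_index, candidates, temperature=0.07):
    # InfoNCE: negative log-softmax of the anchor's similarity
    # to its positive candidate, against all candidates.
    logits = [cosine(anchor, c) / temperature for c in candidates]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[pos_index] / sum(exps))

def dual_branch_loss(img_emb, perc_txt_embs, sem_txt_embs, i, alpha=0.5):
    # Hypothetical combined objective: weight the image-to-perceptual-text
    # term against the image-to-semantic-text term (alpha is illustrative).
    return (alpha * info_nce(img_emb, i, perc_txt_embs)
            + (1 - alpha) * info_nce(img_emb, i, sem_txt_embs))
```

With matched image/text pairs the loss is small; pairing the image with the wrong captions yields a larger loss, which is the pressure that shapes the disentangled feature space.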
📝 Abstract
Contrastive vision-language models such as CLIP have demonstrated excellent zero-shot capability across semantic recognition tasks, largely attributable to training on a large-scale I&1T (one Image with one Text) dataset. However, such multimodal representations often blend semantic and perceptual elements, with a particular emphasis on semantics. This is problematic for popular tasks like image quality assessment (IQA) and conditional image generation (CIG), which typically require fine control over perceptual and semantic features. Motivated by these observations, this paper presents a new multimodal disentangled representation learning framework that leverages disentangled text to guide image disentanglement. To this end, we first build an I&2T (one Image with a perceptual Text and a semantic Text) dataset, which provides disentangled perceptual and semantic text descriptions for each image. The disentangled text descriptions are then used as supervisory signals to separate pure perceptual representations from CLIP's original "coarse" feature space; we dub the resulting model DeCLIP. Finally, the decoupled feature representations are applied to both image quality assessment (technical and aesthetic quality) and conditional image generation. Extensive experiments and comparisons demonstrate the advantages of the proposed method on these two tasks. The dataset, code, and model will be made publicly available.