CLAP: Isolating Content from Style Through Contrastive Learning with Augmented Prompts

📅 2023-11-28
🏛️ European Conference on Computer Vision
📈 Citations: 4
Influential: 0
🤖 AI Summary
Existing contrastive vision-language models (e.g., CLIP) suffer from entangled content and style representations, limiting their out-of-distribution generalization. To address this, we propose a feature disentanglement framework, grounded in a causal generative view of multimodal data, that integrates textual augmentations (e.g., synonym substitution, logical restructuring) into contrastive learning jointly with image augmentations (e.g., cropping, color jitter). These multimodal perturbations guide the CLIP encoders to attend more selectively to semantic content. We design a plug-and-play augmentation-prompting mechanism that achieves content–style separation without modifying the pre-trained model architecture. Experiments demonstrate significant improvements in zero-shot and few-shot classification accuracy across diverse benchmarks, alongside enhanced robustness to noise, occlusion, and style shifts. Our approach establishes a novel paradigm for multimodal representation disentanglement, advancing both interpretability and generalization in vision-language modeling.
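The contrastive objective over paired augmented views described above can be sketched with a standard NT-Xent (normalized temperature-scaled cross-entropy) loss. This NumPy sketch is illustrative only: the function name, temperature, and implementation are assumptions, not the paper's code, which operates on pre-trained CLIP features.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of the same
    sample (positives); every other pair in the batch is a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # shape (2N, d)
    sim = (z @ z.T) / temperature                 # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    n = z1.shape[0]
    # The positive partner of row i is its other augmented view.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Pulling the two views' embeddings together while pushing apart other samples is what encourages the learned features to keep only the content shared across augmentations, discarding the perturbed style.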
📝 Abstract
Contrastive vision-language models, such as CLIP, have garnered considerable attention for various downstream tasks, mainly due to the remarkable generalization ability of the learned features. However, the features they learn often blend content and style information, which somewhat limits their generalization capabilities under distribution shifts. To address this limitation, we adopt a causal generative perspective for multimodal data and propose contrastive learning with data augmentation to disentangle content features from the original representations. To achieve this, we begin by exploring image augmentation techniques and develop a method to seamlessly integrate them into pre-trained CLIP-like models to extract pure content features. Taking a step further, and recognizing the inherent semantic richness and logical structure of text data, we explore the use of text augmentation to isolate latent content from style features. This enables CLIP-like models' encoders to concentrate on latent content information, refining the representations learned by pre-trained CLIP-like models. Our extensive experiments across diverse datasets demonstrate significant improvements in zero-shot and few-shot classification tasks, alongside enhanced robustness to various perturbations. These results underscore the effectiveness of our proposed methods in refining vision-language representations and advancing the state of the art in multimodal learning.
Problem

Research questions and friction points this paper is trying to address.

Separate content from style in vision-language models
Enhance generalization using contrastive learning with data augmentation
Improve zero-shot and few-shot classification robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive learning with data augmentation
Image augmentation for pure content features
Text augmentation to isolate content
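To illustrate the text-augmentation idea above, here is a toy synonym-substitution function that perturbs a prompt's surface form (style) while leaving its content words, such as the class name, untouched. The synonym table and function name are hypothetical; the paper's actual augmentations (e.g., logical restructuring) use richer transformations.

```python
import random

# Hypothetical synonym table for illustration only.
SYNONYMS = {
    "photo": ["picture", "image", "snapshot"],
    "small": ["little", "tiny"],
}

def augment_prompt(prompt, synonyms=SYNONYMS, seed=None):
    """Return a style-perturbed prompt: swap listed words for synonyms,
    keeping all other (content) words unchanged."""
    rng = random.Random(seed)
    words = prompt.split()
    out = [rng.choice(synonyms[w]) if w in synonyms else w for w in words]
    return " ".join(out)
```

Pairing the original and augmented prompts as positive views in the contrastive loss teaches the text encoder that wording changes are style, while the preserved class name is content.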
Yichao Cai
Australian Institute for Machine Learning, University of Adelaide, SA 5000, Australia
Yuhang Liu
The University of Adelaide
Representation Learning · LLMs · Latent Variable Models · Responsible AI
Zhen Zhang
Australian Institute for Machine Learning, University of Adelaide, SA 5000, Australia
Javen Qinfeng Shi
Australian Institute for Machine Learning, University of Adelaide, SA 5000, Australia