🤖 AI Summary
This work addresses the imbalance between discriminative power and fine-grained detail perception in CLIP's visual encoder, which limits its performance on downstream tasks. To reconcile these competing objectives, we propose Diffusion Contrastive Reconstruction (DCR), a framework that integrates contrastive signals into the diffusion-based image reconstruction process to jointly optimize both capabilities. A key innovation of DCR lies in deriving the contrastive signal from the reconstructed image at each diffusion step rather than from the original input, thereby circumventing gradient conflicts and enabling unified optimization under a single objective. Extensive experiments demonstrate that DCR substantially enhances the quality of visual representations and consistently improves performance across multiple benchmarks as well as when integrated into multimodal large language models.
📄 Abstract
The limited understanding capacity of the visual encoder in Contrastive Language-Image Pre-training (CLIP) has become a key bottleneck for downstream performance. This capacity includes both Discriminative Ability (D-Ability), which reflects class separability, and Detail Perceptual Ability (P-Ability), which focuses on fine-grained visual cues. Recent solutions use diffusion models to enhance representations by conditioning image reconstruction on CLIP visual tokens. We argue that such paradigms may compromise D-Ability and therefore fail to effectively address CLIP's representation limitations. To address this, we integrate contrastive signals into diffusion-based reconstruction to pursue more comprehensive visual representations. We begin with a straightforward design that augments the diffusion process with contrastive learning on input images. However, empirical results show that this naive combination suffers from gradient conflict and yields suboptimal performance. To balance the optimization, we introduce Diffusion Contrastive Reconstruction (DCR), which unifies the learning objective. The key idea is to inject contrastive signals derived from each reconstructed image, rather than from the original input, into the diffusion process. Our theoretical analysis shows that the DCR loss can jointly optimize D-Ability and P-Ability. Extensive experiments across various benchmarks and multimodal large language models validate the effectiveness of our method. The code is available at https://github.com/boyuh/DCR.
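The core idea above can be sketched in code. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: it uses toy linear stand-ins for the CLIP encoder and the diffusion denoiser, a single DDPM-style noise level `alpha_bar`, and a symmetric InfoNCE loss. The key DCR point it demonstrates is that the contrastive term is computed on the reconstructed image `x0_hat` (recovered from the predicted noise), not on the original input `x0`, so reconstruction and contrastive signals flow through one unified objective.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def info_nce(img, txt, temperature=0.07):
    """Symmetric InfoNCE loss; matched (image, text) pairs are positives."""
    img, txt = F.normalize(img, dim=-1), F.normalize(txt, dim=-1)
    logits = img @ txt.t() / temperature
    labels = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Toy stand-ins (assumptions, not the paper's architecture):
# a linear "CLIP visual encoder" and a linear denoiser.
B, D_in, D_feat = 4, 32, 16
encoder = torch.nn.Linear(D_in, D_feat)
denoiser = torch.nn.Linear(D_in + D_feat, D_in)

# DDPM-style cumulative noise level for one timestep t.
alpha_bar = torch.tensor(0.7)

x0 = torch.randn(B, D_in)              # clean "images" (flattened, toy)
text_feats = torch.randn(B, D_feat)    # paired text features (toy)
noise = torch.randn_like(x0)

# Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise.
x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise

# Denoise, conditioned on CLIP visual features of the input.
cond = encoder(x0)
eps_pred = denoiser(torch.cat([x_t, cond], dim=-1))
recon_loss = F.mse_loss(eps_pred, noise)   # standard epsilon prediction

# Key DCR idea: contrast on the *reconstructed* image, not the input.
x0_hat = (x_t - (1 - alpha_bar).sqrt() * eps_pred) / alpha_bar.sqrt()
contrast_loss = info_nce(encoder(x0_hat), text_feats)

# Single unified objective: both signals back-propagate together.
loss = recon_loss + contrast_loss
loss.backward()
```

Because the contrastive gradient reaches the encoder through `x0_hat` (which itself depends on the denoiser's prediction), both abilities are optimized under one loss rather than as two competing terms on the raw input.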