🤖 AI Summary
This work proposes Sparse CLIP, a novel approach that addresses the limited interpretability of dense representations in existing CLIP models without compromising performance or multimodal alignment. By introducing end-to-end sparsity constraints during training and jointly optimizing the contrastive objective with sparse multimodal representations, Sparse CLIP achieves, for the first time, a synergistic improvement in both interpretability and downstream task performance. The method significantly enhances the sparsity and semantic interpretability of learned representations while maintaining or even improving accuracy on tasks such as image classification and cross-modal retrieval. Crucially, it preserves the model's ability to align semantics across modalities, demonstrating that sparsity and strong multimodal alignment can coexist effectively within a unified framework.
📄 Abstract
Contrastive Language-Image Pre-training (CLIP) has become a cornerstone in vision-language representation learning, powering diverse downstream tasks and serving as the default vision backbone in multimodal large language models (MLLMs). Despite its success, CLIP's dense and opaque latent representations pose significant interpretability challenges. A common assumption is that interpretability and performance are in tension: enforcing sparsity during training degrades accuracy, motivating recent post-hoc approaches such as Sparse Autoencoders (SAEs). However, these post-hoc approaches often suffer from degraded downstream performance and loss of CLIP's inherent multimodal capabilities, with most learned features remaining unimodal. We propose a simple yet effective approach that integrates sparsity directly into CLIP training, yielding representations that are both interpretable and performant. Compared to SAEs, our Sparse CLIP representations preserve strong downstream task performance, achieve superior interpretability, and retain multimodal capabilities. We show that multimodal sparse features enable straightforward semantic concept alignment and reveal the training dynamics through which cross-modal knowledge emerges. Finally, as a proof of concept, we train a vision-language model on sparse CLIP representations that enables interpretable, vision-based steering capabilities. Our findings challenge the conventional wisdom that interpretability requires sacrificing accuracy, demonstrating that the two can be co-optimized and offering a promising design principle for future models.
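To make the core idea concrete, here is a minimal PyTorch sketch of what "integrating sparsity directly into CLIP training" could look like: a standard symmetric InfoNCE contrastive loss augmented with a sparsity penalty on the joint embeddings. The abstract does not specify the exact sparsity mechanism; the ReLU rectification, the L1 penalty form, and the `sparsity_weight` coefficient below are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F


def sparse_clip_loss(img_emb, txt_emb, temperature=0.07, sparsity_weight=1e-3):
    """CLIP-style contrastive loss with an added sparsity penalty (sketch).

    img_emb, txt_emb: (batch, dim) image/text embeddings from the two encoders.
    """
    # Rectify embeddings so the L1 penalty can drive activations to exact zeros
    # (an assumption here; the paper's sparsity constraint may differ).
    img_emb = F.relu(img_emb)
    txt_emb = F.relu(txt_emb)

    # Standard symmetric InfoNCE on L2-normalized embeddings,
    # with matched image-text pairs on the diagonal as positives.
    img_n = F.normalize(img_emb, dim=-1)
    txt_n = F.normalize(txt_emb, dim=-1)
    logits = img_n @ txt_n.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    contrastive = (F.cross_entropy(logits, labels)
                   + F.cross_entropy(logits.t(), labels)) / 2

    # L1 penalty encouraging most embedding dimensions to be zero.
    sparsity = img_emb.abs().mean() + txt_emb.abs().mean()
    return contrastive + sparsity_weight * sparsity
```

Because both modalities are penalized inside the same objective, sparsity is learned end-to-end alongside cross-modal alignment, rather than being imposed post hoc on a frozen model as an SAE would.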