AI Summary
Contrastive vision-language models (e.g., CLIP) suffer from semantic misalignment, where image regions mismatch the semantics of short captions, and from representation entanglement, where aligning with long text couples visual features together, limiting generalization on short-prompt downstream tasks. To address this, we propose a modular vision-language alignment framework grounded in a theoretically derived identifiability condition that ensures lossless cross-modal semantic preservation and visual representation disentanglement. Our approach introduces fine-grained text concept identification, dynamic image-text fragment alignment, and disentangled representation learning to enable multi-granularity semantic matching. Extensive experiments demonstrate significant improvements over strong baselines across diverse downstream tasks, including zero-shot classification, referring expression comprehension, and open-vocabulary detection, while effectively mitigating semantic misalignment. Ablation studies validate the theoretical soundness of our identifiability condition and confirm the model's robust generalization under short-prompt settings.
Abstract
Contrastive Language-Image Pre-training (CLIP) (Radford et al., 2021) has emerged as a pivotal model in computer vision and multimodal learning, achieving state-of-the-art performance in aligning visual and textual representations through contrastive learning. However, CLIP struggles with information misalignment in many image-text datasets and suffers from entangled representations. On the one hand, the short captions attached to a single image in datasets like MSCOCO may describe disjoint regions of that image, leaving the model uncertain about which visual features to retain or disregard. On the other hand, directly aligning long captions with images can lead to the retention of entangled details, preventing the model from learning disentangled, atomic concepts, ultimately limiting its generalization on downstream tasks involving short prompts.
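To make the starting point concrete, the following is a minimal sketch of CLIP's symmetric contrastive (InfoNCE) objective. Random features stand in for real image and text encoder outputs; the temperature value and batch shapes are illustrative assumptions, not CLIP's training configuration.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_contrastive_loss(img_feats, txt_feats, temperature=0.07):
    """Symmetric cross-entropy over the image-text similarity matrix."""
    img = l2_normalize(img_feats)
    txt = l2_normalize(txt_feats)
    logits = img @ txt.T / temperature      # (N, N) cosine similarities
    labels = np.arange(len(logits))         # matched pairs sit on the diagonal

    def xent(l):
        # numerically stable log-softmax cross-entropy per row
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
loss_random = clip_contrastive_loss(img, rng.normal(size=(4, 8)))
loss_aligned = clip_contrastive_loss(img, img)  # perfectly matched features
```

Because every caption is contrasted against the whole image embedding, this loss has no mechanism to express that a short caption describes only part of the image, which is exactly the misalignment discussed above.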
In this paper, we establish theoretical conditions that enable flexible alignment between textual and visual representations across varying levels of granularity. Specifically, our framework ensures that a model can not only preserve cross-modal semantic information in its entirety but also disentangle visual representations to capture fine-grained textual concepts. Building on this foundation, we introduce SmartCLIP, a novel approach that identifies and aligns the most relevant visual and textual representations in a modular manner. Superior performance across various tasks demonstrates its capability to handle information misalignment and supports our identification theory. The code is available at https://github.com/Mid-Push/SmartCLIP.
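One way to picture "identifying and aligning the most relevant representations" is fragment-level matching: score an image-caption pair by letting each textual concept pick its best-matching image region, so short captions only constrain the regions they actually describe. The max-cosine matching rule and the scoring function below are illustrative assumptions for this sketch, not necessarily SmartCLIP's exact mechanism.

```python
import numpy as np

def fragment_alignment_score(region_feats, concept_feats):
    """Mean over concepts of each concept's best-matching region similarity."""
    regions = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    concepts = concept_feats / np.linalg.norm(concept_feats, axis=1, keepdims=True)
    sim = concepts @ regions.T              # (num_concepts, num_regions)
    return sim.max(axis=1).mean()           # each concept attends to one region

rng = np.random.default_rng(1)
regions = rng.normal(size=(6, 16))          # e.g. patch/region embeddings
# concepts near regions 0 and 3, as if the caption mentions those regions
caption_concepts = regions[[0, 3]] + 0.05 * rng.normal(size=(2, 16))
unrelated_concepts = rng.normal(size=(2, 16))
score_match = fragment_alignment_score(regions, caption_concepts)
score_unrelated = fragment_alignment_score(regions, unrelated_concepts)
```

Under this scoring, a caption describing two regions gets full credit without forcing the remaining regions to match it, which is the behavior a modular alignment objective aims for.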