AI Summary
Existing contrastive vision-language pretraining models (e.g., CLIP) struggle to model fine-grained compositional semantics across image regions. To address this, we propose PowerCLIP, which introduces a powerset alignment mechanism to enable explicit, fine-grained semantic matching between image regions and text phrases. To circumvent the exponential computational complexity of exact powerset computation, we design a Non-Linear Aggregator (NLA) that approximates the precise powerset contrastive loss in linear time. Built upon the CLIP framework, PowerCLIP integrates textual dependency parsing trees with image region detection outputs to construct structured region–phrase correspondences. Extensive experiments on zero-shot classification and cross-modal retrieval demonstrate that PowerCLIP consistently outperforms CLIP and state-of-the-art variants, validating its superior compositional generalization, fine-grained alignment capability, and robustness.
Abstract
Contrastive vision-language pre-training frameworks such as CLIP have demonstrated impressive zero-shot performance across a range of vision-language tasks. Recent studies have shown that aligning individual text tokens with specific image patches or regions enhances fine-grained compositional understanding. However, it remains challenging to capture compositional semantics that span multiple image regions. To address this limitation, we propose PowerCLIP, a novel contrastive pre-training framework enhanced by powerset alignment, which exhaustively optimizes region-to-phrase alignments by minimizing the loss defined between powersets of image regions and textual parse trees. Since the naive powerset construction incurs exponential computational cost due to the combinatorial explosion in the number of region subsets, we introduce efficient non-linear aggregators (NLAs) that reduce complexity from O(2^M) to O(M) with respect to the number of regions M, while approximating the exact loss value with arbitrary precision. Our extensive experiments demonstrate that PowerCLIP outperforms state-of-the-art methods in zero-shot classification and retrieval tasks, underscoring the compositionality and robustness of our approach. Our code will be made publicly available.
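The abstract does not specify the exact form of the non-linear aggregators, but the claimed O(2^M) → O(M) reduction is reminiscent of a standard algebraic factorization: a sum over all subsets of per-region weights factorizes into a product over individual regions, since Σ_{S⊆[M]} Π_{i∈S} a_i = Π_i (1 + a_i). The sketch below illustrates this identity on exponentiated region-phrase scores; the function names and the choice of exp-weights are illustrative assumptions, not PowerCLIP's actual NLA.

```python
import itertools
import math

def powerset_sum_naive(scores):
    """Sum over every subset S of the M regions of the product of
    per-region weights exp(s_i), i.e. sum_{S} prod_{i in S} exp(s_i).
    Enumerates all 2^M subsets explicitly: O(2^M)."""
    M = len(scores)
    total = 0.0
    for r in range(M + 1):
        for subset in itertools.combinations(range(M), r):
            prod = 1.0  # empty subset contributes 1
            for i in subset:
                prod *= math.exp(scores[i])
            total += prod
    return total

def powerset_sum_linear(scores):
    """Same quantity via the factorization
    sum_{S} prod_{i in S} a_i = prod_i (1 + a_i), with a_i = exp(s_i).
    One pass over the regions: O(M)."""
    total = 1.0
    for s in scores:
        total *= 1.0 + math.exp(s)
    return total
```

For M = 20 regions the naive version already touches over a million subsets, while the factorized form needs 20 multiplications; any aggregator exploiting such structure can make powerset-level losses tractable.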