PowerCLIP: Powerset Alignment for Contrastive Pre-Training

📅 2025-11-28
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing contrastive vision-language pre-training models (e.g., CLIP) struggle to model fine-grained compositional semantics that span image regions. To address this, the authors propose PowerCLIP, which introduces a powerset alignment mechanism for explicit, fine-grained semantic matching between image regions and text phrases. To circumvent the exponential cost of exact powerset computation, they design a Non-Linear Aggregator (NLA) that approximates the exact powerset contrastive loss in linear time. Built on the CLIP framework, PowerCLIP combines textual dependency parse trees with image region detections to construct structured region–phrase correspondences. Extensive experiments on zero-shot classification and cross-modal retrieval show that PowerCLIP consistently outperforms CLIP and state-of-the-art variants in compositional generalization, fine-grained alignment, and robustness.

๐Ÿ“ Abstract
Contrastive vision-language pre-training frameworks such as CLIP have demonstrated impressive zero-shot performance across a range of vision-language tasks. Recent studies have shown that aligning individual text tokens with specific image patches or regions enhances fine-grained compositional understanding. However, it remains challenging to capture compositional semantics that span multiple image regions. To address this limitation, we propose PowerCLIP, a novel contrastive pre-training framework enhanced by powerset alignment, which exhaustively optimizes region-to-phrase alignments by minimizing the loss defined between powersets of image regions and textual parse trees. Since the naive powerset construction incurs exponential computational cost due to the combinatorial explosion in the number of region subsets, we introduce efficient non-linear aggregators (NLAs) that reduce complexity from O(2^M) to O(M) with respect to the number of regions M, while approximating the exact loss value with arbitrary precision. Our extensive experiments demonstrate that PowerCLIP outperforms state-of-the-art methods in zero-shot classification and retrieval tasks, underscoring the compositionality and robustness of our approach. Our code will be made publicly available.
Problem

Research questions and friction points this paper is trying to address.

Capturing compositional semantics spanning multiple image regions
Addressing exponential computational cost in powerset alignment
Enhancing fine-grained vision-language compositional understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Powerset alignment optimizes region-to-phrase correspondences
Non-linear aggregators reduce complexity from exponential to linear
Framework enhances compositional understanding in vision-language tasks
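The page does not reproduce the paper's NLA construction, but the flavor of the exponential-to-linear reduction described above can be illustrated with a standard algebraic identity: a sum over all 2^M non-empty subsets of per-region scores, where each subset contributes the product of its members, factorizes into a single product over the M regions. The sketch below is a toy analogy under that assumption (hypothetical function names, not PowerCLIP's actual aggregator or loss), contrasting the brute-force O(2^M) computation with the O(M) factored form:

```python
from itertools import combinations

def powerset_sum_naive(scores):
    """Brute force: enumerate every non-empty subset of regions and
    sum the product of its per-region scores -- O(2^M) subsets."""
    items = list(scores)
    total = 0.0
    for k in range(1, len(items) + 1):
        for subset in combinations(items, k):
            prod = 1.0
            for s in subset:
                prod *= s
            total += prod
    return total

def powerset_sum_factored(scores):
    """Same quantity in O(M), using the identity
    sum_{S != {}} prod_{r in S} f(r) = prod_r (1 + f(r)) - 1."""
    total = 1.0
    for s in scores:
        total *= 1.0 + s
    return total - 1.0

scores = [0.3, 0.7, 0.2, 0.9]
# Both agree up to floating-point rounding, but the second never
# touches the 2^M subsets explicitly.
print(powerset_sum_naive(scores))
print(powerset_sum_factored(scores))
```

The same factorization idea is what makes linear-time aggregation over subset structures plausible in general; the paper's NLAs additionally guarantee approximation of the exact loss to arbitrary precision, which this toy identity does not attempt to model.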