🤖 AI Summary
In compositional zero-shot learning (CZSL), semantic entanglement between attributes and objects couples their visual features, which hinders generalization to unseen attribute-object compositions.
Method: We propose a cross-compositional feature disentanglement framework: (1) a composition graph models shared semantic relationships between attributes and objects, enforcing graph-guided disentanglement constraints; (2) lightweight, co-adaptive language and vision adapters (L-Adapter/V-Adapter) are inserted into a frozen CLIP backbone for efficient cross-modal disentanglement; (3) a feature disentanglement regularization term and a zero-shot compositional generalization training strategy are introduced.
Contribution/Results: This work establishes the first cross-compositional disentanglement paradigm tailored for CZSL. It achieves state-of-the-art performance on three standard benchmarks. Ablation studies validate the efficacy of each component. Code and data are publicly released.
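The adapter idea in point (2) of the method can be sketched as follows. This is a generic bottleneck-adapter illustration of how a small trainable residual module is inserted into a frozen encoder layer, not the paper's exact L-Adapter/V-Adapter design; all names, shapes, and initialization choices here are hypothetical.

```python
import numpy as np

def adapter_forward(h, W_down, W_up):
    """Residual bottleneck adapter: h + up(ReLU(down(h))).

    Only W_down / W_up would be trained; the frozen backbone
    produces h and consumes the output unchanged in shape.
    """
    z = np.maximum(h @ W_down, 0.0)  # down-project to bottleneck + ReLU
    return h + z @ W_up              # up-project + residual connection

rng = np.random.default_rng(0)
d, r = 8, 2                          # hidden size, bottleneck size (illustrative)
h = rng.normal(size=(3, d))          # 3 token features from a frozen layer
W_down = rng.normal(size=(d, r)) * 0.01
W_up = rng.normal(size=(r, d)) * 0.01

out = adapter_forward(h, W_down, W_up)
print(out.shape)  # (3, 8): same shape as the input, so the module can
                  # be slotted between frozen transformer layers
```

Because the adapter preserves the feature shape and starts near the identity (small weights make the residual branch nearly zero), it can be added to CLIP's text and image encoders without disturbing the pretrained representation at initialization.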
📝 Abstract
Disentanglement of visual features of primitives (i.e., attributes and objects) has shown exceptional results in Compositional Zero-shot Learning (CZSL). However, due to the feature divergence of an attribute (resp. object) when combined with different objects (resp. attributes), it is challenging to learn disentangled primitive features that are general across different compositions. To this end, we propose the solution of cross-composition feature disentanglement, which takes multiple primitive-sharing compositions as inputs and constrains the disentangled primitive features to be general across these compositions. More specifically, we leverage a compositional graph to define the overall primitive-sharing relationships between compositions, and build a task-specific architecture upon the recently successful large pre-trained vision-language model (VLM) CLIP, with dual cross-composition disentangling adapters (called L-Adapter and V-Adapter) inserted into CLIP's frozen text and image encoders, respectively. Evaluation on three popular CZSL benchmarks shows that our proposed solution significantly improves the performance of CZSL, and its components have been verified by solid ablation studies. Our code and data are available at: https://github.com/zhurunkai/DCDA.