Graph-guided Cross-composition Feature Disentanglement for Compositional Zero-shot Learning

📅 2024-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
In compositional zero-shot learning (CZSL), an attribute's visual features shift depending on the object it is paired with (and vice versa), so primitive features disentangled from one composition generalize poorly to others. Method: We propose a cross-composition feature disentanglement framework: (1) a composition graph models primitive-sharing relationships between compositions and imposes graph-guided disentanglement constraints; (2) lightweight, co-adaptive language and vision adapters (L-Adapter and V-Adapter) are inserted into the frozen text and image encoders of a CLIP backbone for efficient cross-modal disentanglement; (3) a feature disentanglement regularization term and a zero-shot compositional generalization training strategy are introduced. Contribution/Results: This work establishes the first cross-composition disentanglement paradigm tailored for CZSL and achieves state-of-the-art performance on three standard benchmarks; ablation studies validate the efficacy of each component. Code and data are publicly released.
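The adapters described above follow the general parameter-efficient pattern of inserting small trainable modules into a frozen encoder. A minimal sketch of such a bottleneck adapter is shown below; the class name, dimensions, and zero-initialization are illustrative assumptions, not the paper's exact L-Adapter/V-Adapter design.

```python
import numpy as np

class BottleneckAdapter:
    """Hypothetical bottleneck adapter: down-project, ReLU, up-project,
    with a residual connection. Only these small matrices would be trained;
    the backbone encoder stays frozen."""

    def __init__(self, dim=512, bottleneck=64, seed=0):
        rng = np.random.default_rng(seed)
        self.down = rng.normal(0.0, 0.02, (dim, bottleneck))
        # Zero-initialized up-projection: the adapter starts as the identity,
        # so inserting it does not perturb the pre-trained features.
        self.up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        # Residual path preserves the frozen backbone's representation.
        return x + np.maximum(x @ self.down, 0.0) @ self.up

x = np.random.default_rng(1).normal(size=(4, 512))  # dummy token features
out = BottleneckAdapter()(x)
```

With the zero-initialized up-projection, the adapter's output equals its input before training, a common trick for stable insertion into pre-trained models.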

📝 Abstract
Disentanglement of visual features of primitives (i.e., attributes and objects) has shown exceptional results in Compositional Zero-shot Learning (CZSL). However, due to the feature divergence of an attribute (resp. object) when combined with different objects (resp. attributes), it is challenging to learn disentangled primitive features that are general across different compositions. To this end, we propose the solution of cross-composition feature disentanglement, which takes multiple primitive-sharing compositions as inputs and constrains the disentangled primitive features to be general across these compositions. More specifically, we leverage a compositional graph to define the overall primitive-sharing relationships between compositions, and build a task-specific architecture upon the recently successful large pre-trained vision-language model (VLM) CLIP, with dual cross-composition disentangling adapters (called L-Adapter and V-Adapter) inserted into CLIP's frozen text and image encoders, respectively. Evaluation on three popular CZSL benchmarks shows that our proposed solution significantly improves the performance of CZSL, and its components have been verified by solid ablation studies. Our code and data are available at: https://github.com/zhurunkai/DCDA.
Problem

Research questions and friction points this paper is trying to address.

Disentangling visual features of primitives across compositions
Addressing feature divergence of attributes and objects across different combinations
Enhancing generalizability of primitive features in zero-shot learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-composition feature disentanglement for generalization
Graph-guided primitive-sharing composition relationships
Dual adapters in CLIP for text and image
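The graph-guided idea above links compositions that share a primitive. A minimal sketch of such a primitive-sharing composition graph follows; the toy compositions and the simple pairwise rule are illustrative assumptions, and the paper's actual graph construction may differ.

```python
from itertools import combinations

# Toy attribute-object compositions (illustrative, not from the paper's datasets).
compositions = [("wet", "dog"), ("wet", "cat"), ("dry", "dog"), ("old", "car")]

# Connect two compositions iff they share an attribute or an object,
# i.e. the "primitive-sharing" relationship the graph is meant to encode.
edges = set()
for (a1, o1), (a2, o2) in combinations(compositions, 2):
    if a1 == a2 or o1 == o2:
        edges.add(frozenset([(a1, o1), (a2, o2)]))
```

Here ("wet", "dog") links to ("wet", "cat") through the shared attribute and to ("dry", "dog") through the shared object, while ("old", "car") stays isolated; a disentanglement constraint defined over such edges would push the shared primitive's features to agree across its compositions.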