WARM-CAT: Warm-Started Test-Time Comprehensive Knowledge Accumulation for Compositional Zero-Shot Learning

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation in compositional zero-shot learning caused by distributional shifts in the label space during testing. To mitigate this issue, the authors propose a test-time framework that accumulates multimodal knowledge and dynamically updates prototypes through a warm-start queue initialization, adaptive weight adjustment, and a priority queue mechanism. This approach enables unsupervised fusion of visual and textual data to align cross-modal representations and refine prototype embeddings. The contributions include the construction of a new benchmark, C-Fashion, improvements to the MIT-States dataset, and state-of-the-art results across four benchmarks under both closed-world and open-world evaluation settings.

📝 Abstract
Compositional Zero-Shot Learning (CZSL) aims to recognize novel attribute-object compositions based on the knowledge learned from seen ones. Existing methods suffer from performance degradation caused by the distribution shift of the label space at test time, which stems from the inclusion of unseen compositions recombined from attributes and objects. To overcome this challenge, we propose a novel approach that accumulates comprehensive knowledge in both textual and visual modalities from unsupervised data to update multimodal prototypes at test time. Building on this, we further design an adaptive update weight to control the degree of prototype adjustment, enabling the model to flexibly adapt to the distribution shift during testing. Moreover, we introduce a dynamic priority queue that stores high-confidence images, from which visual prototypes are acquired for inference. Since the model tends to favor compositions already stored in the queue during testing, we warm-start the queue by initializing it with training images for visual prototypes of seen compositions and generating unseen visual prototypes using the mapping learned between seen and unseen textual prototypes. Considering the semantic consistency of multimodal knowledge, we align textual and visual prototypes by multimodal collaborative representation learning. To provide a more reliable evaluation for CZSL, we introduce a new benchmark dataset, C-Fashion, and refine the widely used but noisy MIT-States dataset. Extensive experiments indicate that our approach achieves state-of-the-art performance on four benchmark datasets under both closed-world and open-world settings. The source code and datasets are available at https://github.com/xud-yan/WARM-CAT.
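The queue-and-update mechanism described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the class name, the fixed queue capacity, the mean-pooled prototype, and the shift-driven EMA weight in `adaptive_update` are all assumptions made for the sake of the example.

```python
import heapq
import numpy as np

class WarmStartQueue:
    """Hypothetical sketch: a confidence-ranked queue of image features
    maintaining a visual prototype for one composition. When full, the
    lowest-confidence entry is evicted in favor of a more confident one."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.heap = []        # min-heap of (confidence, counter, feature)
        self._counter = 0     # tie-breaker so feature arrays are never compared

    def push(self, confidence, feature):
        item = (confidence, self._counter, feature)
        self._counter += 1
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, item)
        elif confidence > self.heap[0][0]:
            heapq.heapreplace(self.heap, item)  # evict lowest-confidence entry

    def prototype(self):
        # Visual prototype as the mean of stored features (an assumption).
        feats = np.stack([f for _, _, f in self.heap])
        return feats.mean(axis=0)

def adaptive_update(old_proto, new_proto, shift):
    """Blend old and new prototypes; a larger estimated distribution
    shift yields a larger update weight (illustrative rule only)."""
    w = min(1.0, max(0.0, shift))
    return (1.0 - w) * old_proto + w * new_proto
```

A warm start would correspond to pre-filling each composition's queue with training-image features (for seen compositions) or mapped textual prototypes (for unseen ones) before test-time pushes begin.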
Problem

Research questions and friction points this paper is trying to address.

Compositional Zero-Shot Learning
distribution shift
unseen compositions
label space
test-time adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compositional Zero-Shot Learning
Test-Time Adaptation
Multimodal Prototype Alignment
Dynamic Priority Queue
Warm-Started Initialization