🤖 AI Summary
Generalized zero-shot learning (GZSL) faces challenges in cross-domain knowledge transfer and high computational costs associated with joint visual-semantic distribution modeling.
Method: This paper proposes the first reverse-conditional diffusion-based framework for zero-shot learning, enabling unsupervised semantic alignment without requiring paired annotations. It reverses the diffusion process to generate semantic features from image features, leveraging a conditional diffusion model integrated with a multi-head vision Transformer, Hadamard cross-additive embedding, and sinusoidal time encoding to construct a joint temporal-visual embedding space. A novel diffusion alignment loss is further introduced.
Contribution/Results: The method achieves state-of-the-art performance on standard benchmarks—including CUB, SUN, and AWA2—with significant improvements in unseen-class accuracy and strong cross-dataset generalization capability.
📝 Abstract
In Generalized Zero-Shot Learning (GZSL), we aim to recognize both seen and unseen categories using a model trained only on seen categories. In computer vision, this translates into a classification problem, where knowledge from seen categories is transferred to unseen categories by exploiting the relationships between visual features and available semantic information, such as text corpora or manual annotations. However, learning this joint distribution is costly and requires one-to-one training with corresponding semantic information. We present a reversed conditional Diffusion-based model (RevCD) that mitigates this issue by generating semantic features synthesized from visual inputs by leveraging Diffusion models' conditional mechanisms. Our RevCD model consists of a cross Hadamard-Addition embedding of a sinusoidal time schedule and a multi-headed visual transformer for attention-guided embeddings. The proposed approach introduces three key innovations. First, we reverse the process of generating semantic space based on visual data, introducing a novel loss function that facilitates more efficient knowledge transfer. Second, we apply Diffusion models to zero-shot learning - a novel approach that exploits their strengths in capturing data complexity. Third, we demonstrate our model's performance through a comprehensive cross-dataset evaluation. The complete code will be available on GitHub.