RevCD - Reversed Conditional Diffusion for Generalized Zero-Shot Learning

📅 2024-08-31
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generalized zero-shot learning (GZSL) faces challenges in cross-domain knowledge transfer and in the high computational cost of jointly modeling the visual-semantic distribution. Method: This paper proposes the first reversed-conditional diffusion framework for zero-shot learning, which reverses the usual generation direction to synthesize semantic features from image features, avoiding costly one-to-one training with paired semantic annotations. The conditional diffusion model is integrated with a multi-head vision Transformer, a cross Hadamard-Addition embedding, and sinusoidal time encoding to construct a joint temporal-visual embedding space; a novel diffusion alignment loss is further introduced. Contribution/Results: The method achieves state-of-the-art performance on standard benchmarks, including CUB, SUN, and AWA2, with significant improvements in unseen-class accuracy and strong cross-dataset generalization.
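The reversed generation direction described above, diffusing the semantic vector while conditioning on the visual features, can be sketched as a standard conditional DDPM training step. All dimensions, the linear noise schedule, and the placeholder linear denoiser below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
# visual features v from a vision backbone, semantic features s (e.g. attributes).
D_VIS, D_SEM, T = 8, 4, 100

# Linear beta schedule and its cumulative alpha products.
betas = np.linspace(1e-4, 2e-2, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(s0, t, eps):
    """Forward diffusion: noise the clean semantic vector s0 up to step t."""
    return np.sqrt(alpha_bar[t]) * s0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Placeholder denoiser: predicts the added noise from (noisy s, step t, visual v).
# In RevCD this role is played by the transformer-based network; here it is a
# random linear map purely to make the training objective concrete.
W = rng.normal(scale=0.1, size=(D_SEM + 1 + D_VIS, D_SEM))

def eps_hat(s_t, t, v):
    x = np.concatenate([s_t, [t / T], v])
    return x @ W

def training_loss(s0, v):
    """One Monte-Carlo sample of the simplified DDPM objective,
    E_t ||eps - eps_hat(s_t, t, v)||^2, with the *semantic* vector as the
    diffused variable and the *visual* vector as the condition -- the
    reversal of the usual visual-feature-generation setup."""
    t = rng.integers(T)
    eps = rng.normal(size=D_SEM)
    s_t = q_sample(s0, t, eps)
    return np.mean((eps - eps_hat(s_t, t, v)) ** 2)

loss = training_loss(rng.normal(size=D_SEM), rng.normal(size=D_VIS))
```

At inference, the same conditioning on visual features would drive the reverse denoising chain to produce a semantic vector for an unseen-class image, which can then be matched against class semantic prototypes.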

📝 Abstract
In Generalized Zero-Shot Learning (GZSL), we aim to recognize both seen and unseen categories using a model trained only on seen categories. In computer vision, this translates into a classification problem in which knowledge from seen categories is transferred to unseen categories by exploiting the relationships between visual features and available semantic information, such as text corpora or manual annotations. However, learning this joint distribution is costly and requires one-to-one training with corresponding semantic information. We present a reversed conditional Diffusion-based model (RevCD) that mitigates this issue by synthesizing semantic features directly from visual inputs, leveraging the conditional mechanisms of Diffusion models. Our RevCD model consists of a cross Hadamard-Addition embedding of a sinusoidal time schedule and a multi-headed visual transformer for attention-guided embeddings. The proposed approach introduces three key innovations. First, we reverse the generation process, producing the semantic space from visual data, and introduce a novel loss function that facilitates more efficient knowledge transfer. Second, we apply Diffusion models to zero-shot learning, a novel approach that exploits their strength in capturing data complexity. Third, we demonstrate our model's performance through a comprehensive cross-dataset evaluation. The complete code will be available on GitHub.
Problem

Research questions and friction points this paper is trying to address.

Recognizing unseen categories using seen category training in GZSL
Generating semantic features from visual inputs via Diffusion models
Improving knowledge transfer efficiency with a novel loss function
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reversed conditional Diffusion for semantic feature generation
Cross Hadamard-Addition embedding with multi-headed transformer
Novel loss function for efficient knowledge transfer
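The "cross Hadamard-Addition embedding with multi-headed transformer" above combines a sinusoidal encoding of the diffusion step with the visual embedding. The paper's exact fusion operator is not specified here; the sketch below shows one plausible reading (elementwise product plus elementwise addition), with all function names and dimensions being illustrative assumptions:

```python
import numpy as np

def sinusoidal_time_encoding(t, dim):
    """Standard transformer-style sinusoidal encoding of the diffusion step t."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

def hadamard_additive_fuse(time_emb, vis_emb):
    """One plausible reading of a 'cross Hadamard-Addition' embedding:
    fuse the two vectors via their elementwise (Hadamard) product plus
    their elementwise sum. Purely illustrative -- the paper's operator
    may differ."""
    return time_emb * vis_emb + time_emb + vis_emb

DIM = 16
t_emb = sinusoidal_time_encoding(t=10, dim=DIM)
v_emb = np.random.default_rng(0).normal(size=DIM)
joint = hadamard_additive_fuse(t_emb, v_emb)  # joint temporal-visual embedding
```

In the full model, a fused embedding like `joint` would be attended over by the multi-head vision Transformer to produce the attention-guided embeddings that condition the denoiser.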
William Heyden
Faculty of Science and Technology (REALTEK), Norwegian University of Life Sciences, NMBU, 1430 Ås, Norway
Habib Ullah
Faculty of Science and Technology (REALTEK), Norwegian University of Life Sciences, NMBU, 1430 Ås, Norway
M. S. Siddiqui
Faculty of Science and Technology (REALTEK), Norwegian University of Life Sciences, NMBU, 1430 Ås, Norway
Fadi Al Machot
Associate Professor in Machine Learning, Norwegian University of Life Sciences
Machine Learning · Neural-Symbolic Learning · Active and Assisted Living · Data Mining · Zero/Few-Shot