Mutually Causal Semantic Distillation Network for Zero-Shot Learning

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of unidirectional weakly supervised attention mechanisms in zero-shot learning, which struggle to adequately model the causal semantic relationships between visual features and attributes. To overcome this, the authors propose a mutually causal semantic distillation framework that introduces, for the first time, a bidirectional causal attention subnetwork. This subnetwork simultaneously guides visual feature learning through attributes and reconstructs attribute representations using visual information, with both directions jointly optimized under a semantic distillation loss. The approach enables deep cross-modal semantic mutual teaching and alignment, achieving state-of-the-art performance across four standard benchmarks—CUB, SUN, AWA2, and FLO—and significantly outperforming existing strong baselines.

📝 Abstract
Zero-shot learning (ZSL) aims to recognize unseen classes in the open world guided by side-information (e.g., attributes). Its key task is to infer the latent semantic knowledge between visual and attribute features on seen classes, and thus conduct a desirable semantic knowledge transfer from seen classes to unseen ones. Prior works simply utilize unidirectional attention in a weakly-supervised manner to learn spurious and limited latent semantic representations, which fail to effectively discover the intrinsic semantic knowledge (e.g., attribute semantics) shared between visual and attribute features. To solve these challenges, we propose a mutually causal semantic distillation network (termed MSDN++) to distill intrinsic and sufficient semantic representations for ZSL. MSDN++ consists of an attribute$\rightarrow$visual causal attention sub-net that learns attribute-based visual features, and a visual$\rightarrow$attribute causal attention sub-net that learns visual-based attribute features. The causal attention encourages the two sub-nets to learn causal vision-attribute associations, yielding reliable features through causal visual/attribute learning. Under the guidance of a semantic distillation loss, the two mutual attention sub-nets learn collaboratively and teach each other throughout training. Extensive experiments on four widely-used benchmark datasets (i.e., CUB, SUN, AWA2, and FLO) show that our MSDN++ yields significant improvements over strong baselines, leading to new state-of-the-art performances.
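The bidirectional design described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (random features, toy dimensions, dot-product attention, and a symmetric-KL stand-in for the paper's semantic distillation loss), not the authors' implementation: one direction pools region features per attribute, the other reconstructs attribute representations per region, and the distillation term pulls their attribute-score distributions toward each other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not taken from the paper)
R, D, K = 49, 16, 5                 # visual regions, feature dim, attributes
V = rng.normal(size=(R, D))         # region-level visual features
A = rng.normal(size=(K, D))         # attribute embeddings in a shared space

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# attribute->visual sub-net: each attribute attends over regions
# to pool an attribute-grounded visual feature.
attn_a2v = softmax(A @ V.T, axis=1)            # (K, R) attention weights
visual_feats = attn_a2v @ V                    # (K, D)

# visual->attribute sub-net: each region attends over attributes
# to reconstruct a visual-based attribute representation.
attn_v2a = softmax(V @ A.T, axis=1)            # (R, K) attention weights
attr_feats = attn_v2a @ A                      # (R, D)

# Each sub-net produces a distribution over the K attributes;
# mutual teaching aligns the two distributions.
scores_a2v = softmax((visual_feats * A).sum(axis=1))   # (K,)
scores_v2a = softmax(attr_feats.mean(axis=0) @ A.T)    # (K,)

def sym_kl(p, q, eps=1e-8):
    """Symmetric KL divergence, a simple proxy for a distillation loss."""
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

distill_loss = sym_kl(scores_a2v, scores_v2a)
print(f"distillation loss: {distill_loss:.4f}")
```

In training, this loss term would be minimized jointly with each sub-net's classification objective, so the two directions regularize each other rather than being optimized independently.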
Problem

Research questions and friction points this paper is trying to address.

Zero-shot learning
Semantic knowledge transfer
Vision-attribute association
Latent semantic representation
Unseen class recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

mutually causal attention
semantic distillation
zero-shot learning
vision-attribute association
causal representation learning