🤖 AI Summary
In domain-adaptive zero-shot learning (DAZSL), existing methods struggle to jointly achieve cross-domain transfer and cross-category generalization, especially under high annotation costs and scarce target-domain data. In particular, they largely fail to harness the semantic generalization capacity of vision-language models such as CLIP, leading to inefficient knowledge transfer and degraded cross-modal alignment during fine-tuning. To address this, we introduce CLIP into the DAZSL framework for the first time, proposing a Semantic Relation Structure Loss and a Cross-Modal Alignment Retention Strategy. Together, these model the semantic topology among categories while enforcing consistency between the visual and textual embedding spaces. Our approach achieves significant improvements over state-of-the-art methods on the I2AwA and I2WebV benchmarks, demonstrating superior effectiveness and robustness when generalizing jointly to unseen classes and the target domain.
📝 Abstract
The high cost of data annotation has spurred research on training deep learning models in data-limited scenarios. Existing paradigms, however, fail to balance cross-domain transfer with cross-category generalization, motivating Domain-Adaptive Zero-Shot Learning (DAZSL). Although vision-language models such as CLIP have inherent advantages for DAZSL, current studies do not fully exploit their potential. Applying CLIP to DAZSL faces two core challenges: inefficient cross-category knowledge transfer due to the lack of semantic relation guidance, and degraded cross-modal alignment during target-domain fine-tuning. To address these issues, we propose the Semantic Relation-Enhanced CLIP (SRE-CLIP) Adapter framework, which integrates a Semantic Relation Structure Loss and a Cross-Modal Alignment Retention Strategy. As the first CLIP-based DAZSL method, SRE-CLIP achieves state-of-the-art performance on the I2AwA and I2WebV benchmarks, significantly outperforming existing approaches.
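To make the two objectives more concrete, here is a minimal toy sketch of one plausible instantiation. The abstract does not give the actual formulas, so everything below is an assumption: the relation loss is sketched as matching pairwise cosine-similarity structure between visual class prototypes and text embeddings, and the retention term as penalizing drift from frozen CLIP features. The function names and shapes are hypothetical, not the paper's definitions.

```python
# Hypothetical illustration of the two losses named in the abstract.
# The exact forms used by SRE-CLIP are NOT specified in this text;
# this is a simplified numpy stand-in, not the paper's implementation.
import numpy as np

def cosine_sim_matrix(X):
    """Pairwise cosine similarities between row vectors."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def semantic_relation_structure_loss(visual_protos, text_embeds):
    """Assumed form: encourage the similarity structure among visual
    class prototypes to match that of the CLIP text embeddings,
    i.e., preserve the semantic topology among categories."""
    R_v = cosine_sim_matrix(visual_protos)
    R_t = cosine_sim_matrix(text_embeds)
    return float(np.mean((R_v - R_t) ** 2))

def alignment_retention_loss(adapted_feats, frozen_clip_feats):
    """Assumed form: penalize drift of fine-tuned visual features away
    from the frozen CLIP embedding space (distillation-style term)."""
    return float(np.mean(np.sum((adapted_feats - frozen_clip_feats) ** 2, axis=1)))

# Toy usage: 5 classes with 8-dim embeddings, prototypes near the text anchors.
rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))
protos = text + 0.1 * rng.normal(size=(5, 8))
print(semantic_relation_structure_loss(protos, text))
print(alignment_retention_loss(protos, text))
```

Both terms would typically be added, with weighting coefficients, to the standard adaptation objective during target-domain fine-tuning; the weights and the exact relation measure are design choices this sketch leaves open.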