🤖 AI Summary
To address the poor robustness and severe performance degradation of few-shot learning (FSL) models under realistic, complex conditions, such as object occlusion, motion blur, small-scale targets, and illumination or background interference, this paper introduces RD-FSL, a novel multi-domain few-shot benchmark designed to evaluate environmental robustness. It also proposes CRLNet, a conditional representation learning network that models interactions between support and query sets as conditional information, enhancing intra-class compactness and inter-class separability. CRLNet integrates cross-image feature interaction, multi-domain data augmentation, and end-to-end meta-learning. Extensive experiments across six benchmark datasets demonstrate consistent superiority over state-of-the-art methods, with average accuracy improvements ranging from 6.83% to 16.98%. Both the RD-FSL benchmark dataset and the source code are publicly released.
📝 Abstract
Few-shot learning (FSL) has recently been widely adopted to overcome the scarcity of training data in domain-specific visual recognition. In real-world scenarios, environmental factors such as complex backgrounds, varying lighting conditions, long-distance shooting, and moving targets often cause test images to contain incomplete targets or noise disruptions. However, current research on evaluation datasets and methodologies has largely ignored the concept of "environmental robustness", i.e., maintaining consistent performance in complex and diverse physical environments. This neglect has led to a notable decline in the performance of FSL models during practical testing compared to their training performance. To bridge this gap, we introduce a new real-world multi-domain few-shot learning (RD-FSL) benchmark, which comprises four domains and six evaluation datasets. The test images in this benchmark feature various challenging elements, such as camouflaged objects, small targets, and blurriness. Our evaluation experiments reveal that existing methods struggle to utilize training images effectively to generate accurate feature representations for challenging test images. To address this problem, we propose a novel conditional representation learning network (CRLNet) that integrates the interactions between training and testing images as conditional information in their respective representation processes. The main goal is to reduce intra-class variance or enhance inter-class variance at the feature representation level. Finally, comparative experiments show that CRLNet surpasses current state-of-the-art methods, achieving performance improvements ranging from 6.83% to 16.98% across diverse settings and backbones. The source code and dataset are available at https://github.com/guoqianyu-alberta/Conditional-Representation-Learning.
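To make the idea of conditioning one set's representation on the other concrete, here is a minimal sketch (not the paper's actual architecture) of how query features might be re-encoded with cross-attention over the support set; the function name, the residual-style blend, and the single-head attention step are all illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conditional_representation(query, support):
    """Re-encode query features conditioned on the support set via one
    cross-attention step (a hypothetical simplification, not CRLNet itself).

    query:   (Q, d) array of query-image features
    support: (S, d) array of support-image features
    Returns a (Q, d) array blending each query with attended support context.
    """
    d = query.shape[1]
    attn = softmax(query @ support.T / np.sqrt(d), axis=1)  # (Q, S) weights
    context = attn @ support                                # (Q, d) support context
    return 0.5 * (query + context)                          # residual-style blend

# Toy usage: queries are noisy copies of two 1-shot support features.
rng = np.random.default_rng(0)
support = rng.normal(size=(2, 8))
query = support + 0.3 * rng.normal(size=(2, 8))
cond = conditional_representation(query, support)
```

The intent mirrors the abstract's goal: by injecting support-set context into each query representation (and, symmetrically, query context into support representations in the full method), same-class features are pulled together before the metric-based comparison.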