🤖 AI Summary
Cross-domain few-shot learning suffers from poor model generalization, difficult domain adaptation, and high computational overhead. This paper proposes Task-level Contrastive Learning (TCL), a framework that models the semantic structure of task representations via unsupervised clustering to enable efficient knowledge transfer without requiring domain-specific priors. Its core contributions are: (1) introducing the notion of task-level contrastiveness, along with task augmentation strategies and a dedicated task-level contrastive loss; and (2) seamlessly integrating task representation learning, contrastive learning, and meta-learning into a lightweight, plug-and-play architecture. Evaluated on the MetaDataset benchmark, TCL achieves significant improvements over state-of-the-art methods in both classification accuracy and inference efficiency, while adding no model parameters or training complexity.
📝 Abstract
Few-shot classification and meta-learning methods typically struggle to generalize across diverse domains: most approaches focus on a single dataset and fail to transfer knowledge across seen and unseen domains. Existing solutions often suffer from low accuracy and high computational costs, and rely on restrictive assumptions. In this paper, we introduce the notion of task-level contrastiveness, a novel approach designed to address these issues. We first introduce simple ways to define task augmentations, and then define a task-level contrastive loss that encourages unsupervised clustering of task representations. Our method is lightweight and can be easily integrated into existing few-shot/meta-learning algorithms while providing significant benefits. Crucially, it improves generalization and computational efficiency without requiring prior knowledge of task domains. We demonstrate the effectiveness of our approach through experiments on the MetaDataset benchmark, where it achieves superior performance without additional complexity.
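The abstract does not spell out the loss, but the idea of task augmentations plus a contrastive objective over task representations can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual method: it assumes a task is embedded as the mean of its support-set features, that an augmentation is a random subsample of the support set, and that the loss is a SimCLR-style NT-Xent applied across tasks rather than across images. The names `task_embedding`, `augment_task`, and `task_contrastive_loss` are invented for this sketch.

```python
import numpy as np

def task_embedding(support_features):
    # Illustrative task encoder: represent a task by the mean of its
    # support-set feature vectors (the paper's encoder may differ).
    return support_features.mean(axis=0)

def augment_task(support_features, rng, keep=0.8):
    # Hypothetical task augmentation: subsample the support set to
    # produce a second "view" of the same task.
    n = support_features.shape[0]
    idx = rng.choice(n, size=max(1, int(keep * n)), replace=False)
    return support_features[idx]

def task_contrastive_loss(z1, z2, temperature=0.1):
    # SimCLR-style NT-Xent loss at the task level: z1[i] and z2[i]
    # are two views of task i (the positive pair); all other tasks
    # in the batch serve as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (T, T) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the positive pairs on the diagonal.
    return -np.mean(np.diag(log_prob))

# Toy batch: 8 tasks, each with 20 support examples of dimension 16.
rng = np.random.default_rng(0)
tasks = [rng.normal(size=(20, 16)) for _ in range(8)]
z1 = np.stack([task_embedding(augment_task(t, rng)) for t in tasks])
z2 = np.stack([task_embedding(augment_task(t, rng)) for t in tasks])
loss = task_contrastive_loss(z1, z2)
```

Minimizing such a loss pulls the two views of each task together while pushing different tasks apart, which is one plausible way an unsupervised clustering of task representations could emerge without domain labels.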