Task-Level Contrastiveness for Cross-Domain Few-Shot Learning

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cross-domain few-shot learning suffers from poor model generalization, challenging domain adaptation, and high computational overhead. This paper proposes Task-level Contrastive Learning (TCL), a framework that models the semantic structure of task representations via unsupervised clustering to enable efficient knowledge transfer without requiring domain-specific priors. Its core contributions are: (1) introducing the notion of task-level contrastiveness, along with task augmentation strategies and a dedicated task-level contrastive loss; and (2) seamlessly integrating task representation learning, contrastive learning, and meta-learning into a lightweight, plug-and-play architecture. Evaluated on the MetaDataset benchmark, TCL achieves significant improvements over state-of-the-art methods in both classification accuracy and inference efficiency, while introducing no additional model parameters or training complexity.

📝 Abstract
Few-shot classification and meta-learning methods typically struggle to generalize across diverse domains, as most approaches focus on a single dataset and fail to transfer knowledge across various seen and unseen domains. Existing solutions often suffer from low accuracy and high computational costs, and rely on restrictive assumptions. In this paper, we introduce the notion of task-level contrastiveness, a novel approach designed to address these issues. We start by introducing simple ways to define task augmentations, and then define a task-level contrastive loss that encourages unsupervised clustering of task representations. Our method is lightweight and can be easily integrated within existing few-shot/meta-learning algorithms while providing significant benefits. Crucially, it leads to improved generalization and computational efficiency without requiring prior knowledge of task domains. We demonstrate the effectiveness of our approach through experiments on the MetaDataset benchmark, where it achieves superior performance without additional complexity.
Problem

Research questions and friction points this paper is trying to address.

Improves cross-domain generalization in few-shot learning
Reduces computational costs while maintaining high accuracy
Enhances task representation clustering without domain knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces task-level contrastive loss for clustering
Uses task augmentations to improve generalization
Lightweight integration with existing few-shot algorithms
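The paper itself does not publish its exact formulation here, but the idea described above, contrasting augmented views of whole tasks rather than individual images, can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: task representations are mean-pooled support-set embeddings, the task augmentation is support-set subsampling, and the loss is a standard InfoNCE objective applied at the task level. All function names (`task_representation`, `augment_task`, `task_contrastive_loss`) are illustrative, not from the paper.

```python
import numpy as np

def task_representation(support_embeddings):
    # Mean-pool the support-set embeddings into one task vector, then
    # L2-normalize (a simple choice; the paper's task encoder may differ).
    v = support_embeddings.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-8)

def augment_task(support_embeddings, rng, keep=0.5):
    # Hypothetical task augmentation: randomly subsample the support set,
    # producing a perturbed "view" of the same task.
    n = support_embeddings.shape[0]
    idx = rng.choice(n, size=max(1, int(keep * n)), replace=False)
    return support_embeddings[idx]

def task_contrastive_loss(tasks, rng, temperature=0.1):
    # InfoNCE-style loss over task representations: two augmented views of
    # the same task are positives; views of all other tasks are negatives.
    views = []
    for t in tasks:
        views.append(task_representation(augment_task(t, rng)))
        views.append(task_representation(augment_task(t, rng)))
    z = np.stack(views)                       # (2T, d), unit-normalized
    sim = z @ z.T / temperature               # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    loss = 0.0
    for i in range(len(z)):
        pos = i + 1 if i % 2 == 0 else i - 1  # the paired view of task i
        loss += -sim[i, pos] + np.log(np.sum(np.exp(sim[i])))
    return loss / len(z)
```

Minimizing this loss pulls representations of augmented views of the same task together while pushing different tasks apart, which is one way to obtain the unsupervised clustering of task representations the abstract describes, with no domain labels involved.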
Kristi Topollai
New York University
Anna Choromanska
New York University
machine learning