Unsupervised Meta-Learning via In-Context Learning

📅 2024-05-25
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited feature-generalization capability of unsupervised meta-learning. We propose a novel paradigm based on contextual sequence modeling: each meta-task is formulated as an image sequence prediction problem, in which a Transformer encoder implicitly models task context from support-set images and directly regresses representations for query images. To our knowledge, this is the first integration of in-context learning into unsupervised meta-learning. We further introduce contrastive data mixing and strong augmentations to automatically generate diverse pseudo-tasks, enhancing generalization to unseen tasks. Our method sets a new state of the art across standard benchmarks, consistently outperforming existing unsupervised meta-learning approaches. Notably, it matches or exceeds its supervised and self-supervised counterparts, empirically validating our core design principle: "generalization over memorization."
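The in-context formulation described above can be sketched in a few lines: support-set embeddings and a query embedding are concatenated into one sequence, and a self-attention layer lets the query position attend over the support context to produce its representation. This is a minimal illustrative sketch, not the paper's actual architecture; the dimensions, random weights, and single-head attention are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over the task sequence.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # rows are attention weights
    return A @ V

# Hypothetical shapes: 4 support images + 1 query, each embedded to d=16.
d = 16
support = rng.normal(size=(4, d))   # support-set image embeddings
query = rng.normal(size=(1, d))     # query image embedding

# The meta-task becomes one sequence; the encoder attends over the support
# context and regresses a representation at the query position.
seq = np.concatenate([support, query], axis=0)
Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))
out = self_attention(seq, Wq, Wk, Wv)
query_repr = out[-1]                # in-context representation of the query
print(query_repr.shape)             # (16,)
```

In the actual method a full Transformer encoder (multiple layers and heads, learned end-to-end) plays the role of this single attention layer; the sketch only shows how support context flows into the query representation.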

📝 Abstract
Unsupervised meta-learning aims to learn feature representations from unsupervised datasets that can transfer to downstream tasks with limited labeled data. In this paper, we propose a novel approach to unsupervised meta-learning that leverages the generalization abilities of in-context learning observed in transformer architectures. Our method reframes meta-learning as a sequence modeling problem, enabling the transformer encoder to learn task context from support images and utilize it to predict query images. At the core of our approach lies the creation of diverse tasks generated using a combination of data augmentations and a mixing strategy that challenges the model during training while fostering generalization to unseen tasks at test time. Experimental results on benchmark datasets showcase the superiority of our approach over existing unsupervised meta-learning baselines, establishing it as the new state-of-the-art. Remarkably, our method achieves competitive results with supervised and self-supervised approaches, underscoring its efficacy in leveraging generalization over memorization.
Problem

Research questions and friction points this paper is trying to address.

Unsupervised meta-learning for feature representation
Leveraging in-context learning generalization
Enhancing task generalization via data augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised meta-learning via transformers
Sequence modeling for task context
Data augmentation for diverse tasks
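The pseudo-task construction the paper relies on (augmentations plus a mixing strategy over unlabeled data) can be sketched as follows. This is a hedged illustration under stated assumptions: `augment` stands in for the paper's strong augmentations, and `mix` is a mixup-style convex combination; the real pipeline operates on images, not flat vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    # Stand-in for strong augmentations (crops, color jitter, etc.);
    # additive noise here, purely for illustration.
    return x + rng.normal(scale=0.1, size=x.shape)

def mix(x1, x2, alpha=0.4):
    # Mixing strategy in the spirit of mixup: a convex combination of two
    # samples, producing a harder, novel pseudo-sample.
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2

# Hypothetical unlabeled pool: 10 images flattened to 64-dim vectors.
pool = rng.normal(size=(10, 64))

# Build a 2-way pseudo-task: each pseudo-class is one unlabeled image plus
# its augmented views; a mixed query makes the task more challenging.
a, b = pool[rng.choice(10, size=2, replace=False)]
support = np.stack([augment(a), augment(a), augment(b), augment(b)])
support_labels = np.array([0, 0, 1, 1])    # pseudo-labels from image identity
query = np.stack([augment(a), mix(a, b)])  # mixed query hardens the task
print(support.shape, query.shape)          # (4, 64) (2, 64)
```

Sampling many such pseudo-tasks per batch is what lets the model meta-train without any human labels while still seeing diverse, non-trivial tasks.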
Anna Vettoruzzo
Halmstad University, Sweden
Lorenzo Braccaioli
University of Trento, Italy
Joaquin Vanschoren
Eindhoven University of Technology; Google DeepMind (Visiting)
Marlena Nowaczyk
Halmstad University, Sweden