🤖 AI Summary
Large-scale unstructured training data incurs high storage costs, makes data quality and domain balance hard to ensure, and raises privacy and ethical risks. To address these issues, this paper proposes a meta-learning training paradigm that leverages multiple small-scale, domain-specific datasets to enhance the cross-domain generalization of Transformer models in in-context learning. The core innovation lies in constructing a modular, high-quality, and diverse collection of miniature datasets (e.g., Meta-Album) and designing a tailored meta-learning optimization framework. Experiments span controlled evaluation, continual learning, and unsupervised settings. With the test domain fully held out, the approach matches the performance of large-data training while significantly improving generalization and robustness against catastrophic forgetting. This work offers a novel pathway toward secure, transparent, and sustainable in-context learning.
📝 Abstract
In-context learning enables transformer models to generalize to new tasks based solely on input prompts, without any weight updates. However, existing training paradigms typically rely on large, unstructured datasets that are costly to store, difficult to audit for quality and balance, and pose privacy and ethical concerns due to the inclusion of sensitive information. Motivated by these limitations and risks, we propose an alternative training strategy that leverages a collection of multiple small-scale, domain-specific datasets. We empirically demonstrate that the increased quality and diversity of such data improve the generalization abilities of in-context learners beyond their training domain, while achieving performance comparable to models trained on a single large-scale dataset. We investigate this paradigm by using meta-learning to train an in-context learner on the Meta-Album collection under several settings. First, we evaluate performance in a controlled environment where the test domain is completely excluded from the training knowledge. Second, we examine the robustness of these models to forgetting in a continual scenario where each dataset is accessible only for a limited time. Finally, we explore the more challenging unsupervised scenario. Our findings demonstrate that transformers still generalize in in-context prediction when trained on a curated dataset collection, while offering advantages in modularity and replaceability.
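The episodic setup described above can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: it fabricates a handful of small domain-specific datasets (synthetic Gaussian clusters standing in for the Meta-Album collection), samples N-way K-shot episodes from them, and answers queries with a simple nearest-centroid rule over the episode's support set — a stand-in for a trained transformer's in-context prediction, since both classify a query using only the prompt, with no weight updates. All dataset names, shapes, and the centroid rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(seed, n_classes=4, per_class=20):
    # Toy stand-in for one small, domain-specific dataset:
    # Gaussian clusters in a 2-D feature space.
    r = np.random.default_rng(seed)
    centers = r.normal(scale=5.0, size=(n_classes, 2))
    X = np.concatenate([c + r.normal(size=(per_class, 2)) for c in centers])
    y = np.repeat(np.arange(n_classes), per_class)
    return X, y

# A modular collection of small domains (hypothetical names).
domains = {f"domain_{i}": make_domain(i) for i in range(5)}

def sample_episode(X, y, n_way=3, k_shot=5, n_query=5, r=rng):
    # Draw an N-way K-shot episode: K support and n_query query
    # examples per sampled class, with labels remapped to 0..n_way-1.
    classes = r.choice(np.unique(y), size=n_way, replace=False)
    support, query = [], []
    for new_label, c in enumerate(classes):
        idx = r.permutation(np.flatnonzero(y == c))
        support += [(X[i], new_label) for i in idx[:k_shot]]
        query += [(X[i], new_label) for i in idx[k_shot:k_shot + n_query]]
    return support, query

def in_context_predict(support, query_x):
    # Classify a query from the episode's "prompt" alone
    # (nearest support-class centroid) -- no weight updates.
    labels = sorted({l for _, l in support})
    centroids = {l: np.mean([x for x, ll in support if ll == l], axis=0)
                 for l in labels}
    dists = [np.linalg.norm(query_x - centroids[l]) for l in labels]
    return labels[int(np.argmin(dists))]

# Episodic evaluation across all domains in the collection.
correct = total = 0
for name, (X, y) in domains.items():
    support, query = sample_episode(X, y)
    for qx, qy in query:
        correct += int(in_context_predict(support, qx) == qy)
        total += 1
print(f"episodic accuracy over {len(domains)} domains: {correct / total:.2f}")
```

Because each domain is a self-contained module, one can be swapped out or held fully out of training simply by removing its entry from `domains` — the modularity and replaceability advantage the abstract refers to.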