🤖 AI Summary
In few-shot learning, existing metric-based meta-learning approaches suffer from degraded generalization to unseen classes due to over-reliance on deep metrics optimized for seen classes. To address this, we propose a meta-component composition framework that models classifiers as reconfigurable sets of meta-components. During meta-training, orthogonal regularization explicitly decouples these components, enhancing their diversity and functional specificity—thereby enabling effective extraction of task-invariant discriminative substructures. This decoupling mitigates overfitting to seen classes and improves cross-class generalization. Evaluated on standard benchmarks including Mini-ImageNet and Tiered-ImageNet, our method achieves significant improvements over state-of-the-art metric-learning approaches. Empirical results validate the efficacy of both meta-component decoupling and compositional modeling for robust few-shot classification.
📝 Abstract
In few-shot learning, classifiers are expected to generalize to unseen classes given only a small number of instances of each new class. A popular approach to few-shot learning is metric-based meta-learning. However, it depends heavily on a deep metric learned on seen classes, which may overfit to those classes and fail to generalize to unseen ones. To improve generalization, we explore the substructures of classifiers and propose a novel meta-learning algorithm that learns each classifier as a combination of meta-components. Meta-components are learned across meta-learning episodes on seen classes and disentangled by an orthogonal regularizer, which promotes their diversity and captures the various substructures shared among different classifiers. Extensive experiments on few-shot benchmarks show the superior performance of the proposed method.
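The core idea — composing each classifier from shared meta-components and keeping those components diverse with an orthogonal penalty — can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the paper's implementation: the component matrix, per-class combination weights, and the specific Frobenius-norm penalty shown here are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, C = 8, 4, 5  # feature dim, number of meta-components, number of classes

# Meta-components: K shared substructures, each a d-dimensional vector.
components = rng.normal(size=(K, d))

# Per-class combination weights: each classifier is built by mixing components.
alphas = rng.normal(size=(C, K))

# Each row is one class's classifier, composed from the shared components.
classifiers = alphas @ components  # shape (C, d)

# Orthogonal regularizer (one common form): normalize components and
# penalize off-diagonal entries of their Gram matrix, pushing the
# components toward mutually orthogonal, functionally distinct directions.
M = components / np.linalg.norm(components, axis=1, keepdims=True)
gram = M @ M.T                                  # pairwise cosine similarities
ortho_penalty = np.sum((gram - np.eye(K)) ** 2)  # small when components are decoupled
```

In training, `ortho_penalty` would be added to the episodic meta-learning loss so that gradient updates simultaneously fit the seen-class tasks and keep the components decoupled.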