Toward Better Generalization in Few-Shot Learning through the Meta-Component Combination

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In few-shot learning, existing metric-based meta-learning approaches suffer from degraded generalization to unseen classes due to over-reliance on deep metrics optimized for seen classes. To address this, we propose a meta-component composition framework that models classifiers as reconfigurable sets of meta-components. During meta-training, orthogonal regularization explicitly decouples these components, enhancing their diversity and functional specificity—thereby enabling effective extraction of task-invariant discriminative substructures. This decoupling mitigates overfitting to seen classes and improves cross-class generalization. Evaluated on standard benchmarks including Mini-ImageNet and Tiered-ImageNet, our method achieves significant improvements over state-of-the-art metric-learning approaches. Empirical results validate the efficacy of both meta-component decoupling and compositional modeling for robust few-shot classification.

📝 Abstract
In few-shot learning, classifiers are expected to generalize to unseen classes given only a small number of instances of each new class. One popular solution to few-shot learning is metric-based meta-learning. However, it depends heavily on the deep metric learned on seen classes, which may overfit to those classes and fail to generalize well to unseen classes. To improve generalization, we explore the substructures of classifiers and propose a novel meta-learning algorithm that learns each classifier as a combination of meta-components. Meta-components are learned across meta-learning episodes on seen classes and disentangled by imposing an orthogonal regularizer, which promotes their diversity and captures various shared substructures among different classifiers. Extensive experiments on few-shot benchmark tasks show the superior performance of the proposed method.
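The combination idea in the abstract can be sketched in a few lines: each classifier's weight vector is built as a weighted sum of shared meta-components. This is a minimal illustration, not the paper's implementation; the names `combine_classifier`, `components`, and `coeffs` are hypothetical, and in the actual method the combination coefficients would be inferred per episode from the support set rather than fixed by hand.

```python
import numpy as np

def combine_classifier(components, coeffs):
    """Build one classifier's weight vector as a linear combination of
    shared meta-components.

    components: (K, D) array of K meta-component vectors of dimension D
    coeffs:     (K,)   combination coefficients for this classifier
    returns:    (D,)   classifier weight vector
    """
    return coeffs @ components

# Toy example: 4 meta-components in an 8-dimensional embedding space.
rng = np.random.default_rng(0)
components = rng.standard_normal((4, 8))
coeffs = np.array([0.5, 0.3, 0.1, 0.1])
w = combine_classifier(components, coeffs)  # shape (8,)
```

Because every classifier draws from the same pool of meta-components, substructures learned on seen classes can be recombined for unseen classes, which is the intuition behind the improved generalization claimed above.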
Problem

Research questions and friction points this paper is trying to address.

Addresses overfitting in few-shot learning by enhancing classifier generalization
Proposes meta-component combination to capture diverse shared substructures across classifiers
Improves generalization on unseen classes through orthogonal regularization of meta-components
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns classifiers as meta-component combinations
Imposes orthogonal regularizer for diversity
Captures shared substructures across classifiers
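The orthogonal regularizer named in the bullets above can be sketched as a Frobenius-norm penalty on the Gram matrix of row-normalized meta-components. This is a common form of orthogonal regularization and is shown here only as an assumption about the general shape of the penalty; the paper's exact formulation may differ.

```python
import numpy as np

def orthogonal_regularizer(components):
    """Penalty ||C C^T - I||_F^2 on row-normalized meta-components.

    Minimizing this term pushes distinct meta-components toward mutual
    orthogonality, which promotes diversity and discourages redundant
    (overlapping) components.
    """
    K = components.shape[0]
    # Row-normalize so the penalty targets directions, not vector norms.
    C = components / np.linalg.norm(components, axis=1, keepdims=True)
    gram = C @ C.T
    return np.sum((gram - np.eye(K)) ** 2)

# Perfectly orthogonal components incur zero penalty ...
low = orthogonal_regularizer(np.eye(3))
# ... while duplicated components are penalized heavily.
high = orthogonal_regularizer(np.ones((2, 4)))
```

In training, this penalty would be added to the episodic classification loss with a weighting hyperparameter, so decoupling is enforced jointly with the few-shot objective.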