🤖 AI Summary
To address the challenges of slow convergence and poor adaptation in few-shot tasks within meta-learning, this paper proposes a meta-adaptation method based on learnable mirror descent. The core contribution is a neural-network-parameterized distance-generating function that induces a nonlinear mirror map, explicitly capturing the complex geometric structure—such as non-quadratic curvature—of the loss landscape and thereby overcoming the limitations of standard Euclidean metrics. While preserving the theoretical convergence rate of mirror descent (O(ε⁻²)), the method significantly improves adaptation efficiency in few-shot settings. Empirical results demonstrate that it achieves accuracy comparable to standard gradient-based methods using only a minimal number of optimization steps. Extensive experiments on large-scale meta-learning models validate its computational efficiency and scalability.
📝 Abstract
Utilizing task-invariant knowledge acquired from related tasks as prior information, meta-learning offers a principled approach to learning a new task with limited data records. Sample-efficient adaptation of this prior information is a major challenge facing meta-learning, and plays an important role because it facilitates training the sought task-specific model with just a few optimization steps. Past works deal with this challenge through preconditioning that speeds up convergence of the per-task training. Though effective in representing locally quadratic loss curvatures, simple linear preconditioning can be hardly potent with complex loss geometries. Instead of relying on a quadratic distance metric, the present contribution copes with complex loss metrics by learning a versatile distance-generating function, which induces a nonlinear mirror map to effectively capture and optimize a wide range of loss geometries. With suitable parameterization, this generating function is effected by an expressive neural network that is provably a valid distance. Analytical results establish convergence of not only the proposed method, but also all meta-learning approaches based on preconditioning. To attain gradient norm less than $ε$, the convergence rate of $mathcal{O}(ε^{-2})$ is on par with standard gradient-based meta-learning methods. Numerical tests on few-shot learning datasets demonstrate the superior empirical performance of the novel algorithm, as well as its rapid per-task convergence, which markedly reduces the number of adaptation steps, hence also accommodating large-scale meta-learning models.