Learnable Loss Geometries with Mirror Descent for Scalable and Convergent Meta-Learning

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address slow convergence and poor adaptation on few-shot tasks in meta-learning, this paper proposes a meta-adaptation method based on learnable mirror descent. The core contribution is a neural-network-parameterized distance-generating function that induces a nonlinear mirror map, explicitly capturing complex geometric structure of the loss landscape (such as non-quadratic curvature) and thereby overcoming the limitations of standard Euclidean metrics. While preserving the O(ε⁻²) convergence rate of mirror descent, the method markedly improves adaptation efficiency in few-shot settings: empirical results show accuracy comparable to standard gradient-based methods with only a handful of optimization steps, and experiments with large-scale meta-learning models confirm its computational efficiency and scalability.
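For intuition, below is a minimal PyTorch sketch (not the authors' code) of one inner-loop adaptation step with a learnable mirror map. It assumes a coordinate-wise distance-generating function φ(θ) = Σᵢ g(θᵢ) with g'(x) = x + Σⱼ aⱼ tanh(bⱼx + cⱼ) and aⱼ, bⱼ ≥ 0, so that g'' ≥ 1 and φ is strictly convex; the names `LearnableMirrorMap` and `mirror_descent_step` are hypothetical.

```python
# Minimal sketch, not the authors' implementation: mirror descent with a
# learnable, coordinate-wise distance-generating function. With all a_j = 0
# the mirror map is the identity and the update reduces to gradient descent.
import torch
import torch.nn.functional as F

class LearnableMirrorMap(torch.nn.Module):
    """Mirror map nabla phi for phi(theta) = sum_i g(theta_i), where
    g'(x) = x + sum_j a_j * tanh(b_j * x + c_j) with a_j, b_j >= 0."""

    def __init__(self, num_units: int = 8):
        super().__init__()
        # Unconstrained parameters; softplus enforces a_j, b_j >= 0 below.
        self.a = torch.nn.Parameter(torch.full((num_units,), -2.0))
        self.b = torch.nn.Parameter(torch.full((num_units,), -2.0))
        self.c = torch.nn.Parameter(torch.zeros(num_units))

    def forward(self, theta: torch.Tensor) -> torch.Tensor:
        """nabla phi(theta), applied elementwise."""
        a, b = F.softplus(self.a), F.softplus(self.b)
        x = theta.unsqueeze(-1)  # (..., 1) so each tanh unit broadcasts
        return theta + (a * torch.tanh(b * x + self.c)).sum(-1)

    def second_deriv(self, theta: torch.Tensor) -> torch.Tensor:
        """g''(theta) = 1 + sum_j a_j b_j (1 - tanh^2(b_j theta + c_j)) >= 1."""
        a, b = F.softplus(self.a), F.softplus(self.b)
        t = torch.tanh(b * theta.unsqueeze(-1) + self.c)
        return 1.0 + (a * b * (1.0 - t ** 2)).sum(-1)

    def inverse(self, z: torch.Tensor, steps: int = 20) -> torch.Tensor:
        """Inverse mirror map: solve nabla phi(theta) = z coordinate-wise with
        Newton iterations (well-posed because g'' >= 1)."""
        theta = z.clone()
        for _ in range(steps):
            theta = theta - (self.forward(theta) - z) / self.second_deriv(theta)
        return theta

def mirror_descent_step(theta, grad, mmap, lr=0.1):
    """theta_new = (nabla phi)^{-1}(nabla phi(theta) - lr * grad)."""
    return mmap.inverse(mmap(theta) - lr * grad)
```

In a MAML-style outer loop, the mirror-map parameters a, b, c would be meta-learned across tasks by backpropagating through a few such inner steps; the paper's actual parameterization and meta-training procedure may differ.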

📝 Abstract
Utilizing task-invariant knowledge acquired from related tasks as prior information, meta-learning offers a principled approach to learning a new task with limited data records. Sample-efficient adaptation of this prior information is a major challenge facing meta-learning, and plays an important role because it facilitates training the sought task-specific model with just a few optimization steps. Past works deal with this challenge through preconditioning that speeds up convergence of the per-task training. Though effective in representing locally quadratic loss curvatures, simple linear preconditioning can hardly cope with complex loss geometries. Instead of relying on a quadratic distance metric, the present contribution handles complex loss geometries by learning a versatile distance-generating function, which induces a nonlinear mirror map to effectively capture and optimize a wide range of loss geometries. With suitable parameterization, this generating function is realized by an expressive neural network that provably induces a valid distance. Analytical results establish convergence of not only the proposed method, but also all meta-learning approaches based on preconditioning. To attain gradient norm less than $\epsilon$, the convergence rate of $\mathcal{O}(\epsilon^{-2})$ is on par with standard gradient-based meta-learning methods. Numerical tests on few-shot learning datasets demonstrate the superior empirical performance of the novel algorithm, as well as its rapid per-task convergence, which markedly reduces the number of adaptation steps, hence also accommodating large-scale meta-learning models.
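For reference, the mirror descent machinery the abstract builds on can be written out as follows. This is the textbook formulation (not an excerpt from the paper), with φ denoting the distance-generating function that the paper proposes to learn:

```latex
% Mirror descent with distance-generating function \varphi:
% the Bregman divergence D_\varphi replaces the Euclidean proximity term.
\theta_{t+1} \;=\; \arg\min_{\theta}\;
    \eta \,\big\langle \nabla \mathcal{L}(\theta_t),\, \theta \big\rangle
    \;+\; D_{\varphi}(\theta, \theta_t),
\qquad
D_{\varphi}(\theta, \theta')
    \;=\; \varphi(\theta) - \varphi(\theta')
    - \big\langle \nabla \varphi(\theta'),\, \theta - \theta' \big\rangle .
```

Equivalently, θ_{t+1} = (∇φ)⁻¹(∇φ(θ_t) − η∇L(θ_t)). Choosing φ(θ) = ½‖θ‖² recovers plain gradient descent, and a fixed quadratic φ(θ) = ½θᵀPθ recovers the linear preconditioning the abstract contrasts against; a learned non-quadratic φ yields a nonlinear mirror map.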
Problem

Research questions and friction points this paper is trying to address.

Addressing complex loss geometries in meta-learning beyond simple quadratic approximations
Learning versatile distance-generating functions to optimize diverse loss landscapes effectively
Reducing adaptation steps for scalable meta-learning with provable convergence guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns versatile distance-generating function via neural network
Uses nonlinear mirror map to capture complex loss geometries
Ensures provably valid distance and rapid per-task convergence (one possible construction is sketched below)
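One standard way to make a neural distance-generating function "provably valid" is an input-convex neural network (ICNN) plus a strongly convex quadratic, sketched below. This is an illustrative construction under our own assumptions, not necessarily the paper's parameterization, and `ICNNPotential` is a hypothetical name.

```python
# Hedged sketch: a distance-generating function that is convex by construction.
# Non-negative hidden-to-hidden weights plus convex, non-decreasing activations
# make the ICNN output convex in its input (Amos et al., 2017), and the added
# quadratic makes phi strongly convex, so the induced Bregman divergence is a
# valid distance: non-negative, and zero only when its arguments coincide.
import torch
import torch.nn.functional as F

class ICNNPotential(torch.nn.Module):
    def __init__(self, dim: int, hidden: int = 64, depth: int = 2, mu: float = 0.1):
        super().__init__()
        self.mu = mu  # strong-convexity constant of the quadratic term
        # Affine layers acting on the raw input x (unconstrained weights).
        self.x_layers = torch.nn.ModuleList(
            [torch.nn.Linear(dim, hidden) for _ in range(depth)]
            + [torch.nn.Linear(dim, 1)]
        )
        # Hidden-to-hidden weights, made non-negative via softplus in forward().
        self.z_weights = torch.nn.ParameterList(
            [torch.nn.Parameter(0.01 * torch.randn(hidden, hidden)) for _ in range(depth - 1)]
            + [torch.nn.Parameter(0.01 * torch.randn(1, hidden))]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.softplus(self.x_layers[0](x))  # convex in x: softplus of affine
        for lin_x, w in zip(self.x_layers[1:-1], self.z_weights[:-1]):
            # A non-negative combination of convex functions is convex, and
            # softplus is convex and non-decreasing, so convexity is preserved.
            z = F.softplus(lin_x(x) + z @ F.softplus(w).T)
        phi = self.x_layers[-1](x) + z @ F.softplus(self.z_weights[-1]).T
        return phi.squeeze(-1) + 0.5 * self.mu * (x * x).sum(-1)
```

The Bregman divergence D_φ(θ, θ') = φ(θ) − φ(θ') − ⟨∇φ(θ'), θ − θ'⟩ induced by such a φ is non-negative by convexity and vanishes only at θ = θ' thanks to the μ-strongly convex term, which is the property the bullet above refers to.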
Yilang Zhang
University of Minnesota
large language models · machine learning · optimization
Bingcong Li
ETH Zurich
optimization · LLMs · fine-tuning
Georgios B. Giannakis
Dept. of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA