Mirror Descent Using the Tempesta Generalized Multi-parametric Logarithms

📅 2025-06-08
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited geometric and distributional adaptability of Mirror Descent (MD) in constrained optimization. We propose a novel family of MD algorithms grounded in the multi-parameter generalized logarithm of Tempesta. For the first time, this multi-parameter deformed logarithm serves as a mirror map, inducing an infinite-dimensional generalized trace-type entropy and its associated Bregman divergence, along with the corresponding MD update rule. The framework enables end-to-end differentiable hyperparameter learning and unifies mirror-based and mirror-free MD paradigms. Theoretically general and practically flexible, it subsumes classical MD and its major variants while significantly improving convergence speed and robustness in challenging settings, such as non-Euclidean geometries and heavy-tailed distributions. Our core contribution lies in intrinsically integrating geometric awareness, distributional adaptivity, and hyperparameter learnability within a single coherent framework.
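The mirror-map mechanism summarized above can be sketched with the one-parameter Tsallis q-logarithm, which is a special case of the multi-parameter Tempesta family (the full multi-parameter form and its learned hyperparameters are not reproduced here). Function names, the fixed step size, and the simple simplex normalization (used as a surrogate for the exact Bregman projection) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm; q -> 1 recovers the natural log."""
    if q == 1.0:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_exp(y, q):
    """Inverse of q_log on its range; q -> 1 recovers exp."""
    if q == 1.0:
        return np.exp(y)
    return np.maximum(1.0 + (1.0 - q) * y, 0.0) ** (1.0 / (1.0 - q))

def mirror_descent_step(x, grad, eta, q):
    """One MD step: map to the dual space via the deformed log,
    take a gradient step there, and map back with the deformed exp."""
    y = q_log(x, q) - eta * grad
    x_new = q_exp(y, q)
    return x_new / x_new.sum()  # crude projection back onto the simplex
```

For q = 1 this reduces to classical entropic mirror descent (exponentiated gradient); other q values change the primal-dual geometry of the update.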

๐Ÿ“ Abstract
In this paper, we develop a wide class of Mirror Descent (MD) algorithms, which play a key role in machine learning. For this purpose, we formulate a constrained optimization problem in which we exploit the Bregman divergence with the Tempesta multi-parametric deformed logarithm as a link function. This link function, also called a mirror function, defines the mapping between the primal and dual spaces and is associated with a very wide (in fact, theoretically infinite) class of generalized trace-form entropies. In order to derive novel MD updates, we estimate a generalized exponential function which closely approximates the inverse of the multi-parametric Tempesta generalized logarithm. The shape and properties of the Tempesta logarithm and its inverse deformed exponential function can be tuned by several hyperparameters. By learning these hyperparameters, we can adapt to the distribution or geometry of the training data, and we can adjust them to achieve desired properties of the MD algorithms. The concept of applying multi-parametric logarithms allows us to generate a new, wide, and flexible family of MD and mirror-less MD updates.
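The Bregman-divergence construction mentioned in the abstract can be illustrated with a generic sketch: given a convex potential whose gradient acts as the link (mirror) function, the divergence is the first-order Taylor remainder. The negative Shannon entropy below is only the q -> 1 baseline case (the Tempesta trace-form potential itself is not reproduced here), and the helper names are assumptions:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# Negative Shannon entropy: the classical limit of the deformed-log potentials.
# Its gradient, log(p) + 1, plays the role of the link (mirror) function.
neg_entropy = lambda p: np.sum(p * np.log(p))
grad_neg_entropy = lambda p: np.log(p) + 1.0
```

On the probability simplex this particular choice of potential recovers the Kullback-Leibler divergence; swapping in a deformed-log potential yields the corresponding generalized divergence.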
Problem

Research questions and friction points this paper is trying to address.

Develop Mirror Descent algorithms using Tempesta logarithms
Optimize with a Bregman divergence built on the multi-parametric deformed logarithm
Adapt hyperparameters to the data distribution and desired algorithm properties
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses the Tempesta multi-parametric logarithm as the link function
Estimates a generalized exponential approximating the inverse of the Tempesta logarithm for MD updates
Learns hyperparameters to adapt to the data distribution
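The hyperparameter-learning idea in the bullets above can be sketched as a simple outer search over the deformation parameter q of the one-parameter q-logarithm, standing in for the paper's end-to-end differentiable tuning of the full multi-parameter family; all names, the toy objective, and the grid search are illustrative assumptions:

```python
import numpy as np

def q_log(x, q):
    return np.log(x) if q == 1.0 else (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_exp(y, q):
    if q == 1.0:
        return np.exp(y)
    return np.maximum(1.0 + (1.0 - q) * y, 0.0) ** (1.0 / (1.0 - q))

def loss_after_k_steps(q, x0, target, eta=0.1, k=50):
    """Run k deformed-MD steps on f(x) = ||x - target||^2 over the simplex
    and report the final loss, as a proxy score for this choice of q."""
    x = x0.copy()
    for _ in range(k):
        y = q_log(x, q) - eta * 2.0 * (x - target)  # dual-space gradient step
        x = q_exp(y, q)
        x = x / x.sum()  # crude simplex normalization
    return float(np.sum((x - target) ** 2))

# Outer loop: pick the deformation that works best on this problem instance.
x0 = np.full(3, 1.0 / 3.0)
target = np.array([0.6, 0.3, 0.1])
grid = np.linspace(0.5, 1.5, 11)
best_q = min(grid, key=lambda q: loss_after_k_steps(q, x0, target))
```

A grid search is used here only for transparency; the paper's framework instead learns the hyperparameters by gradient-based, end-to-end differentiable optimization.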