A stochastic gradient method for trilevel optimization

📅 2025-05-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses unconstrained trilevel optimization problems, aiming to develop efficient and theoretically grounded stochastic optimization methods for emerging trilevel machine learning formulations—such as hyperparameter adversarial tuning. To handle multiple sources of inexactness—including approximate solutions to the middle- and lower-level subproblems, errors in trilevel adjoint gradient computation, and noisy estimates of higher-order derivatives (Hessians, Jacobians, and third-order tensors)—we propose the first provably convergent stochastic gradient descent algorithm for trilevel optimization. Our key innovation is a unified error propagation analysis framework that rigorously characterizes the interplay among subproblem solution errors, formula truncation errors, and stochastic gradient noise. This yields the first convergence theory for trilevel adjoint gradients under realistic inexactness assumptions. Empirical evaluation on synthetic benchmarks and hyperparameter adversarial tuning tasks demonstrates substantial improvements in both convergence stability and computational efficiency.
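
For concreteness, the unconstrained trilevel problem described above has the nested form below. The notation (f_1, f_2, f_3 for the upper-, middle-, and lower-level objectives, with variables x, y, z) follows a standard convention and need not match the paper's exact symbols:

\min_{x} \; f_1\bigl(x,\, y(x),\, z(x, y(x))\bigr)
\quad\text{where}\quad
y(x) \in \arg\min_{y} \; f_2\bigl(x, y, z(x, y)\bigr)
\quad\text{and}\quad
z(x, y) \in \arg\min_{z} \; f_3(x, y, z).

Differentiating f_1 through the implicit mappings y(·) and z(·,·) is what produces the trilevel adjoint gradient, and it is this chaining that brings Hessians, Jacobians, and third-order tensors into the error analysis.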

📝 Abstract
With the success that the field of bilevel optimization has seen in recent years, similar methodologies have started being applied to solving more difficult applications that arise in trilevel optimization. At the helm of these applications are new machine learning formulations that have been proposed in the trilevel context and, as a result, efficient and theoretically sound stochastic methods are required. In this work, we propose the first-ever stochastic gradient descent method for solving unconstrained trilevel optimization problems and provide a convergence theory that covers all forms of inexactness of the trilevel adjoint gradient, such as the inexact solutions of the middle-level and lower-level problems, inexact computation of the trilevel adjoint formula, and noisy estimates of the gradients, Hessians, Jacobians, and tensors of third-order derivatives involved. We also demonstrate the promise of our approach by providing numerical results on both synthetic trilevel problems and trilevel formulations for hyperparameter adversarial tuning.
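
To make the abstract's error sources concrete, here is a minimal NumPy sketch of the nested structure, assuming toy quadratic objectives. It deliberately commits all three kinds of inexactness the paper analyzes: the middle and lower levels are solved only approximately, the adjoint formula is replaced by a crude finite-difference surrogate, and Gaussian noise is injected to mimic stochastic estimates. All function names, step sizes, and constants are illustrative, not the authors' method.

import numpy as np

# Toy objectives (illustrative, not from the paper):
#   lower level:  f3(x, y, z) = 0.5 * ||z - x - y||^2
#   middle level: f2(x, y, z) = 0.5 * ||y - x||^2 + 0.1 * y.z
#   upper level:  f1(x, y, z) = 0.5 * ||x - 1||^2 + 0.5 * ||y + z||^2

def solve_lower(x, y, steps=20, lr=0.5):
    # Inexact lower-level solve: a few gradient steps on f3 in z.
    z = np.zeros_like(x)
    for _ in range(steps):
        z -= lr * (z - x - y)          # grad_z f3
    return z

def solve_middle(x, steps=20, lr=0.2):
    # Inexact middle-level solve: partial gradient of f2 in y,
    # re-solving the lower level each step and (cheaply but
    # incorrectly) ignoring the dz/dy term -- a truncation error.
    y = np.zeros_like(x)
    for _ in range(steps):
        z = solve_lower(x, y)
        y -= lr * ((y - x) + 0.1 * z)
    return y

def upper_value(x):
    y = solve_middle(x)
    z = solve_lower(x, y)
    return 0.5 * np.sum((x - 1.0) ** 2) + 0.5 * np.sum((y + z) ** 2)

rng = np.random.default_rng(0)
x = np.zeros(5)
eps, lr = 1e-4, 0.1
for _ in range(100):
    # Finite-difference surrogate for the trilevel adjoint gradient,
    # plus Gaussian noise to mimic stochastic derivative estimates.
    base = upper_value(x)
    g = np.array([(upper_value(x + eps * e) - base) / eps
                  for e in np.eye(x.size)])
    g += 0.01 * rng.standard_normal(x.size)
    x -= lr * g

print("final upper-level value:", upper_value(x))

A real implementation would replace the finite-difference loop with the paper's adjoint formula, which reuses the inner solutions rather than re-solving both levels once per coordinate.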
Problem

Research questions and friction points this paper is trying to address.

Develop stochastic gradient method for trilevel optimization
Address inexactness in trilevel adjoint gradient computation
Apply method to hyperparameter adversarial tuning problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic gradient descent for trilevel optimization
Covers inexact adjoint gradient computations
Validated on synthetic trilevel problems and hyperparameter adversarial tuning (one representative formulation is sketched below)
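
One common way to read hyperparameter adversarial tuning as a trilevel (min-max-min) problem, sketched with illustrative symbols that need not match the paper's exact formulation: hyperparameters \lambda at the upper level, an adversarial perturbation \delta in the middle, and model weights w at the bottom:

\min_{\lambda} \; \mathcal{L}_{\mathrm{val}}\bigl(w(\lambda, \delta(\lambda))\bigr)
\quad\text{where}\quad
\delta(\lambda) \in \arg\max_{\delta} \; \mathcal{L}_{\mathrm{tr}}\bigl(w(\lambda, \delta), \lambda;\, \delta\bigr)
\quad\text{and}\quad
w(\lambda, \delta) \in \arg\min_{w} \; \mathcal{L}_{\mathrm{tr}}(w, \lambda;\, \delta).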
Tommaso Giovannelli
University of Cincinnati
Griffin Dean Kent
Lehigh University
Luis Nunes Vicente
Lehigh University
Optimization · Applied Mathematics · Operations Research