STIMULUS: Achieving Fast Convergence and Low Sample Complexity in Stochastic Multi-Objective Learning

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address slow convergence and high sample complexity in stochastic multi-objective optimization, this paper proposes STIMULUS, a path-integral-based recursive gradient estimation framework. We further develop a momentum-accelerated variant, STIMULUS-M, and adaptive-batch variants STIMULUS+/STIMULUS-M+. By avoiding full-gradient computation and integrating recursive updates, momentum mechanisms, and dynamic sampling, our methods achieve optimal sample complexities of $O(n + \sqrt{n}/\varepsilon)$ for non-convex objectives and $O(n + \sqrt{n}\ln(\mu/\varepsilon))$ for $\mu$-strongly convex ones, with convergence rates of $O(1/T)$ and $O(\exp(-\mu T))$, respectively, substantially improving upon state-of-the-art algorithms. The theoretical analysis is rigorous, ensuring both computational efficiency and broad applicability across problem settings.

📝 Abstract
Recently, multi-objective optimization (MOO) has gained attention for its broad applications in ML, operations research, and engineering. However, MOO algorithm design remains in its infancy, and many existing MOO methods suffer from unsatisfactory convergence rates and sample complexity. To address this challenge, in this paper we propose an algorithm called STIMULUS (stochastic path-integrated multi-gradient recursive estimator), a new and robust approach for solving MOO problems. Different from traditional methods, STIMULUS introduces a simple yet powerful recursive framework for updating stochastic gradient estimates to improve convergence performance with low sample complexity. In addition, we introduce an enhanced version of STIMULUS, termed STIMULUS-M, which incorporates a momentum term to further expedite convergence. We establish $O(1/T)$ convergence rates of the proposed methods for non-convex settings and $O(\exp(-\mu T))$ for strongly convex settings, where $T$ is the total number of iteration rounds. Additionally, we achieve the state-of-the-art $O\left(n+\sqrt{n}\varepsilon^{-1}\right)$ sample complexity for non-convex settings and $O\left(n+\sqrt{n}\ln(\mu/\varepsilon)\right)$ for strongly convex settings, where $\varepsilon>0$ is the desired stationarity error. Moreover, to alleviate the periodic full-gradient evaluation requirement in STIMULUS and STIMULUS-M, we further propose enhanced versions with adaptive batching, called STIMULUS+/STIMULUS-M+, and provide their theoretical analysis.
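The path-integrated recursive estimator described in the abstract can be sketched as follows. This is a minimal illustrative reading, not the paper's actual algorithm: the toy per-objective gradient `grad_i`, the specific anchor period `q`, and the uniform averaging of objective gradients (a stand-in for a min-norm/MGDA-style multi-gradient combination) are all assumptions made for the example.

```python
import numpy as np

def grad_i(x, i, idx):
    """Hypothetical stochastic gradient of objective i on sample indices idx.
    Toy stand-in: a quadratic f_i(x) = 0.5 * ||x - c_i||^2 with an
    index-dependent perturbation mimicking minibatch noise."""
    c = np.full_like(x, float(i))
    return x - c + 0.01 * np.mean(idx)

def stimulus_sketch(x0, m=2, n=100, T=20, q=5, batch=8, eta=0.1, rng=None):
    """Sketch of a path-integrated recursive multi-gradient estimator:
    every q steps, anchor each objective's estimate with a full gradient;
    in between, update it recursively with a minibatch gradient difference."""
    rng = np.random.default_rng(rng)
    x, x_prev = x0.copy(), x0.copy()
    v = np.zeros((m, x0.size))
    for t in range(T):
        if t % q == 0:
            # periodic full-gradient anchor (relaxed in the "+" variants)
            v = np.stack([grad_i(x, i, np.arange(n)) for i in range(m)])
        else:
            idx = rng.choice(n, size=batch, replace=False)
            # recursive path-integrated correction, one per objective
            v = np.stack([v[i] + grad_i(x, i, idx) - grad_i(x_prev, i, idx)
                          for i in range(m)])
        # uniform weights stand in for the min-norm multi-gradient step
        d = v.mean(axis=0)
        x_prev, x = x, x - eta * d
    return x
```

Because the same minibatch `idx` is used at both `x` and `x_prev`, the stochastic noise largely cancels in the correction term, which is the intuition behind the variance reduction that drives the improved sample complexity.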
Problem

Research questions and friction points this paper is trying to address.

Improving convergence rate in multi-objective optimization
Reducing sample complexity in stochastic MOO methods
Eliminating periodic full gradient evaluation requirement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recursive stochastic gradient estimates for MOO
Momentum-enhanced STIMULUS-M for faster convergence
Adaptive batching in STIMULUS+ to reduce gradient evaluations
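The momentum and adaptive-batching ideas in the bullets above can be illustrated with a minimal sketch. The heavy-ball form of the momentum update and the geometric batch-size growth schedule are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def momentum_step(x, d, u, eta=0.1, beta=0.9):
    """Heavy-ball-style momentum applied to a combined multi-gradient
    direction d (one plausible reading of the STIMULUS-M acceleration)."""
    u = beta * u + d          # accumulate a momentum buffer
    return x - eta * u, u

def adaptive_batch_size(t, b0=8, growth=1.5, n=1000):
    """Illustrative adaptive batching: grow the minibatch geometrically so
    later iterations use larger batches instead of periodic full gradients,
    capped at the dataset size n."""
    return min(n, int(b0 * growth ** t))
```

For example, starting from a zero momentum buffer, one step with direction `d = (1, 1)` moves the iterate by `-eta * d`, while subsequent steps in a consistent direction accumulate larger displacements, which is how momentum expedites convergence.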