Unbiased and Biased Variance-Reduced Forward-Reflected-Backward Splitting Methods for Stochastic Composite Inclusions

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses stochastic composite inclusion problems that may be nonmonotone, tackling the absence of effective variance-reduction methods under biased estimators. The authors propose a unified framework that, for the first time, brings biased variance-reduced estimators to inclusion and fixed-point problems: they design a new class of estimators tailored to the forward-reflected-backward splitting algorithm and provide a unified analysis covering both the unbiased and biased settings. By integrating variance-reduction techniques such as loopless-SVRG and SAGA, the method achieves an O(1/k) best-iterate convergence rate on the expected squared residual norm and almost-sure convergence in the unbiased case, with oracle complexities of O(n^{2/3}ε^{-2}) for the n-finite-sum setting and O(ε^{-10/3}) for the expectation setting. In the biased setting, the corresponding complexities are O(n^{3/4}ε^{-2}) and O(ε^{-5}). The approach is validated on AUC optimization for imbalanced classification and policy evaluation in reinforcement learning.
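
For orientation, the deterministic FRBS template that the summary builds on (due to Malitsky and Tam) solves the inclusion $0 \in F(x) + T(x)$ via the update below; the paper's stochastic variants replace the forward-reflected direction $2F(x_k) - F(x_{k-1})$ with a variance-reduced estimator, written here as $\widetilde{d}_k$ (our notation, not the paper's):

$$x_{k+1} = J_{\gamma T}\big(x_k - \gamma\,(2F(x_k) - F(x_{k-1}))\big) \quad\leadsto\quad x_{k+1} = J_{\gamma T}\big(x_k - \gamma\,\widetilde{d}_k\big),$$

where $J_{\gamma T} = (I + \gamma T)^{-1}$ is the resolvent of $T$ and, in the unbiased case, $\mathbb{E}\big[\widetilde{d}_k \mid x_k, x_{k-1}\big] = 2F(x_k) - F(x_{k-1})$.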

📝 Abstract
This paper develops new variance-reduction techniques for the forward-reflected-backward splitting (FRBS) method to solve a class of possibly nonmonotone stochastic composite inclusions. Unlike unbiased estimators such as mini-batching, developing biased stochastic variants faces a fundamental technical challenge, and such variants have not previously been used for inclusions and fixed-point problems. We fill this gap by designing a new framework that can handle both unbiased and biased estimators. Our main idea is to construct stochastic variance-reduced estimators of the forward-reflected direction and use them to perform iterate updates. First, we propose a class of unbiased variance-reduced estimators and show that increasing mini-batch SGD, loopless-SVRG, and SAGA estimators fall within this class. For these unbiased estimators, we establish a $\mathcal{O}(1/k)$ best-iterate convergence rate for the expected squared residual norm, together with almost-sure convergence of the iterate sequence to a solution. Consequently, we prove that the best oracle complexities for the $n$-finite-sum and expectation settings are $\mathcal{O}(n^{2/3}\epsilon^{-2})$ and $\mathcal{O}(\epsilon^{-10/3})$, respectively, when employing loopless-SVRG or SAGA, where $\epsilon$ is the desired accuracy. Second, we introduce a new class of biased variance-reduced estimators for the forward-reflected direction, which includes SARAH, Hybrid SGD, and Hybrid SVRG as special instances. While the convergence rates remain valid for these biased estimators, the resulting oracle complexities are $\mathcal{O}(n^{3/4}\epsilon^{-2})$ and $\mathcal{O}(\epsilon^{-5})$ for the $n$-finite-sum and expectation settings, respectively. Finally, we conduct two numerical experiments, on AUC optimization for imbalanced classification and on policy evaluation in reinforcement learning.
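
To make the template concrete, below is a minimal sketch of one plausible instance for the $n$-finite-sum setting $F(x) = \frac{1}{n}\sum_{i=1}^n F_i(x)$: FRBS iterates driven by a loopless-SVRG estimate of the forward-reflected direction. The estimator construction, function signatures, and step sizes here are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def frbs_loopless_svrg(F_i, resolvent, x0, n, gamma=0.1, p=0.1,
                       iters=1000, rng=None):
    """Hypothetical sketch: FRBS with a loopless-SVRG estimate of the
    forward-reflected direction for 0 in F(x) + T(x), where
    F(x) = (1/n) * sum_i F_i(i, x) and resolvent(z, gamma) = J_{gamma*T}(z)."""
    rng = np.random.default_rng() if rng is None else rng
    F_full = lambda z: sum(F_i(i, z) for i in range(n)) / n
    x_prev, x = x0.copy(), x0.copy()
    w, Fw = x0.copy(), F_full(x0)        # snapshot point and full value F(w)
    for _ in range(iters):
        i = rng.integers(n)
        # One plausible unbiased estimate of 2*F(x_k) - F(x_{k-1}):
        # E_i[d] = 2*F(x) - F(x_prev) - F(w) + F(w) = 2*F(x) - F(x_prev).
        d = 2.0 * F_i(i, x) - F_i(i, x_prev) - F_i(i, w) + Fw
        x_prev, x = x, resolvent(x - gamma * d, gamma)
        if rng.random() < p:             # loopless snapshot refresh
            w, Fw = x.copy(), F_full(x)
    return x

# Toy usage: F_i(x) = A_i x - b_i (each A_i positive definite), T the normal
# cone of the nonnegative orthant, so the resolvent is the projection max(z, 0).
n, dim = 50, 10
rng = np.random.default_rng(0)
A = [M @ M.T + np.eye(dim) for M in rng.standard_normal((n, dim, dim))]
b = rng.standard_normal((n, dim))
x_hat = frbs_loopless_svrg(lambda i, z: A[i] @ z - b[i],
                           lambda z, g: np.maximum(z, 0.0),
                           np.zeros(dim), n, gamma=0.01, p=0.2, iters=5000)
```

Taking the expectation over the sampled index $i$ confirms that this particular estimator is unbiased, matching the paper's first (unbiased) class; the snapshot refresh with probability $p$ is what makes the SVRG variant "loopless".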
Problem

Research questions and friction points this paper is trying to address.

stochastic composite inclusions
variance reduction
forward-reflected-backward splitting
biased estimators
nonmonotone operators
Innovation

Methods, ideas, or system contributions that make the work stand out.

variance reduction
forward-reflected-backward splitting
biased estimators
stochastic composite inclusions
oracle complexity
Quoc Tran-Dinh
Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill
convex optimization, nonlinear programming, optimization for machine learning
Nghia Nguyen-Trung
Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill