🤖 AI Summary
This work addresses convex three-composite optimization problems by proposing the Stochastic Primal-Dual Three-Operator Splitting algorithm (TOS-SPDHG), the first method to directly embed three-operator splitting into a stochastic primal-dual framework. It further introduces Equivariant Regularization-by-Denoising (Equivariant RED), a modeling paradigm that leverages pre-trained deep denoisers to integrate deep priors into convex optimization while preserving both generalizability and geometric structure. Theoretically, the algorithm is proven to achieve an $O(1/K)$ ergodic convergence rate. Experimentally, it demonstrates substantial performance gains over classical variational methods and non-learning optimization algorithms on imaging inverse problems, including computed tomography (CT) reconstruction and single-image super-resolution, highlighting its effectiveness in balancing data-driven adaptability with rigorous optimization guarantees.
📝 Abstract
In this work we propose a stochastic primal-dual three-operator splitting algorithm (TOS-SPDHG) for solving a class of convex three-composite optimization problems. Our proposed scheme is a direct three-operator-splitting extension of the SPDHG algorithm [Chambolle et al. 2018]. We provide a theoretical convergence analysis establishing an ergodic $O(1/K)$ convergence rate, and demonstrate the effectiveness of our approach on imaging inverse problems. Moreover, we further propose TOS-SPDHG-RED and TOS-SPDHG-eRED, which utilize the regularization-by-denoising (RED) framework to leverage pre-trained deep denoising networks as priors.
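To make the three-composite template concrete, the sketch below solves a toy instance of $\min_x f(Ax) + g(x) + h(x)$ with a *deterministic* Condat–Vũ-type primal-dual iteration, a well-known relative of (but not the same as) the paper's stochastic TOS-SPDHG updates, which are not reproduced in this abstract. All concrete choices here ($f$ a quadratic data fit, $g$ an $\ell_1$ penalty, $h$ a smooth ridge term, and the step sizes) are illustrative assumptions.

```python
import numpy as np

# Toy three-composite problem: min_x f(Ax) + g(x) + h(x), where
#   f(z) = 0.5*||z - b||^2   (data fit; handled via the prox of its conjugate f*)
#   g(x) = lam*||x||_1       (nonsmooth; handled by its prox, soft-thresholding)
#   h(x) = 0.5*mu*||x||^2    (smooth; handled by its gradient)
# NOTE: this is a generic Condat–Vu-style primal-dual sketch, not the
# paper's TOS-SPDHG algorithm.
rng = np.random.default_rng(0)
m, n = 20, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam, mu = 0.1, 0.05

def prox_g(v, t):
    # prox of t*lam*||.||_1 : componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def prox_fstar(v, s):
    # prox of s*f* for f(z) = 0.5*||z - b||^2 (closed form via Moreau)
    return (v - s * b) / (1.0 + s)

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x)) \
        + 0.5 * mu * (x @ x)

L_A = np.linalg.norm(A, 2)      # operator norm ||A||
tau = 0.9 / (mu / 2 + L_A)      # step sizes chosen so that
sigma = 0.9 / L_A               # 1/tau - sigma*||A||^2 >= mu/2 holds

x, y = np.zeros(n), np.zeros(m)
for _ in range(4000):
    # primal step: prox of g after a gradient step on h and the dual coupling
    x_new = prox_g(x - tau * (mu * x + A.T @ y), tau)
    # dual step on f* with the usual extrapolated primal point
    y = prox_fstar(y + sigma * (A @ (2 * x_new - x)), sigma)
    x = x_new
```

The key design point the abstract alludes to is visible here: each of the three terms is touched only through the operation it supports (a prox for $g$, a gradient for $h$, and a prox of the conjugate for $f$), so no term needs to be smoothed or merged. The stochastic variant in the paper replaces the full dual update with randomly sampled dual blocks, in the spirit of SPDHG.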