🤖 AI Summary
This work addresses the minimization of convex, $L$-smooth functions over unbounded domains in separable real Hilbert spaces, with emphasis on the stable computation of the minimum-norm solution. We propose a regularized stochastic gradient descent (reg-SGD) algorithm that incorporates a time-varying, decaying Tikhonov regularization. We establish strong convergence of reg-SGD to the minimum-norm solution and derive optimal convergence rates, for the first time without assuming boundedness of the iterates or the regularization parameters. We further uncover the dynamic stabilization mechanism that decaying regularization induces on SGD trajectories and formulate a joint optimization criterion for step sizes and regularization parameters. The theoretical framework provides a novel design paradigm for iterative methods for ill-posed inverse problems, balancing stability and accuracy. Numerical experiments on image reconstruction and ODE-based inverse problems demonstrate substantial improvements in convergence behavior, robustness to noise, and solution accuracy.
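For concreteness, the update analyzed in this setting can be sketched as follows (our notation; the paper's exact parameterization may differ). Given a stochastic gradient estimate $\widehat{\nabla f}(x_k)$, step sizes $\eta_k$, and regularization parameters $\lambda_k \to 0$,

$$x_{k+1} = x_k - \eta_k\left(\widehat{\nabla f}(x_k) + \lambda_k x_k\right),$$

i.e., an SGD step on the Tikhonov-regularized objective $f(x) + \tfrac{\lambda_k}{2}\lVert x\rVert^2$. The decay of $\lambda_k$ relative to $\eta_k$ governs whether the iterates converge strongly to the minimum-norm minimizer of $f$.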
📝 Abstract
The present article studies the minimization of convex, $L$-smooth functions defined on a separable real Hilbert space. We analyze regularized stochastic gradient descent (reg-SGD), a variant of stochastic gradient descent that uses Tikhonov regularization with a time-dependent, vanishing regularization parameter. We prove strong convergence of reg-SGD to the minimum-norm solution of the original problem without additional boundedness assumptions. Moreover, we quantify the rate of convergence and optimize the interplay between step sizes and regularization decay. Our analysis reveals how vanishing Tikhonov regularization controls the flow of SGD and yields stable learning dynamics, offering new insights into the design of iterative algorithms for convex problems, including those arising in ill-posed inverse problems. We validate our theoretical findings through numerical experiments on image reconstruction and ODE-based inverse problems.
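As an illustration only (not the authors' code), the following minimal NumPy sketch runs such a regularized SGD loop on a least-squares toy problem, using hypothetical polynomial decay schedules for the step size and the Tikhonov parameter:

```python
import numpy as np

def reg_sgd(A, b, n_iter=10_000, a=0.6, b_decay=0.3, seed=0):
    """Sketch of Tikhonov-regularized SGD for min_x (1/(2m)) * ||Ax - b||^2.

    Hypothetical schedules (for illustration only): step size eta_k = k^{-a},
    regularization lambda_k = k^{-b_decay}, both decaying to zero.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(1, n_iter + 1):
        i = rng.integers(m)                  # sample one row: stochastic gradient
        g = (A[i] @ x - b[i]) * A[i]         # unbiased gradient estimate
        eta_k = k ** (-a)                    # decaying step size
        lam_k = k ** (-b_decay)              # decaying Tikhonov parameter
        x -= eta_k * (g + lam_k * x)         # reg-SGD update
    return x

# Toy usage: consistent, underdetermined system with many minimizers.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
b_vec = A @ rng.standard_normal(50)
x_hat = reg_sgd(A, b_vec)
x_min_norm = np.linalg.pinv(A) @ b_vec       # minimum-norm reference solution
print(np.linalg.norm(x_hat - x_min_norm))
```

In this underdetermined setting the least-squares minimizer is not unique; the decaying Tikhonov term is intended to steer the iterates toward the minimum-norm solution, which the sketch compares against the pseudoinverse solution.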