A Hybrid Stochastic Gradient Tracking Method for Distributed Online Optimization Over Time-Varying Directed Networks

📅 2025-08-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
We address distributed online optimization over time-varying directed networks, where existing methods often rely on bounded gradient assumptions and struggle with variance and bias induced by stochastic gradients. We propose TV-HSGT, the first algorithm to unify hybrid stochastic gradient tracking, recursive gradient estimation, and variance reduction within a time-varying directed graph framework. A novel row-column randomized communication mechanism is introduced, eliminating the need for Perron vector estimation or knowledge of node out-degrees. Theoretically, TV-HSGT achieves a dynamic regret bound of $O(\sqrt{T(1+V_T)})$, improving upon state-of-the-art guarantees for comparable settings. Empirically, on dynamic resource-constrained logistic regression tasks, TV-HSGT demonstrates significantly faster convergence and enhanced stability compared to baseline methods.
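The paper's pseudocode is not reproduced here, but the "hybrid" estimator described above — blending a fresh stochastic gradient with a recursive (SARAH/STORM-style) correction to reduce variance — can be sketched in a few lines. The toy loss $f(x) = \tfrac{1}{2}\|x\|^2$, the noise model, and the mixing weight `beta` are illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x, xi):
    # Noisy gradient of the toy loss f(x) = 0.5 * ||x||^2.
    return x + 0.1 * xi

def hybrid_grad(x, x_prev, v_prev, beta=0.9):
    # Hybrid estimator: evaluate one shared sample at both the current
    # and previous iterate (recursive correction), blended with a plain
    # stochastic gradient; beta=0 recovers SGD, beta=1 the pure recursion.
    xi = rng.standard_normal(x.shape)          # one shared sample
    g, g_prev = stoch_grad(x, xi), stoch_grad(x_prev, xi)
    return g + beta * (v_prev - g_prev)

x = np.ones(5)
v = stoch_grad(x, rng.standard_normal(x.shape))  # initialize tracker
for _ in range(200):
    x, x_prev = x - 0.1 * v, x                   # descent step
    v = hybrid_grad(x, x_prev, v)                # refresh estimator
# The iterate settles near the minimizer at 0 with low residual noise.
```

Because the same sample is evaluated at both iterates, the noise terms largely cancel, which is the variance-reduction effect the summary refers to.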

📝 Abstract
With the increasing scale and dynamics of data, distributed online optimization has become essential for real-time decision-making in various applications. However, existing algorithms often rely on bounded gradient assumptions and overlook the impact of stochastic gradients, especially in time-varying directed networks. This study proposes a novel Time-Varying Hybrid Stochastic Gradient Tracking algorithm named TV-HSGT, based on hybrid stochastic gradient tracking and variance reduction mechanisms. Specifically, TV-HSGT integrates row-stochastic and column-stochastic communication schemes over time-varying digraphs, eliminating the need for Perron vector estimation or out-degree information. By combining current and recursive stochastic gradients, it effectively reduces gradient variance while accurately tracking global descent directions. Theoretical analysis demonstrates that TV-HSGT can achieve improved bounds on dynamic regret without assuming gradient boundedness. Experimental results on logistic regression tasks confirm the effectiveness of TV-HSGT in dynamic and resource-constrained environments.
Problem

Research questions and friction points this paper is trying to address.

Distributed online optimization in time-varying directed networks
Overcoming reliance on bounded gradient assumptions
Reducing gradient variance without out-degree information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid stochastic gradient tracking method
Row and column stochastic communication schemes
Variance reduction without gradient boundedness assumptions
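The row-/column-stochastic split in the bullets above follows the push-pull pattern for gradient tracking over digraphs. TV-HSGT's actual time-varying, randomized update is not reproduced here; the sketch below is a minimal deterministic push-pull loop on a fixed 3-node digraph with illustrative quadratic losses $f_i(x) = \tfrac{1}{2}(x - b_i)^2$. Note that building `C` below requires each node's out-degree — precisely the information the paper's randomized mechanism avoids:

```python
import numpy as np

# Directed graph on 3 nodes with self-loops: 1->2, 2->3, 3->1, 1->3.
# R is row-stochastic: node i averages over {itself} + in-neighbors.
R = np.array([[1/2, 0.0, 1/2],
              [1/2, 1/2, 0.0],
              [1/3, 1/3, 1/3]])
# C is column-stochastic: node j splits its message over {itself} + out-neighbors.
C = np.array([[1/3, 0.0, 1/2],
              [1/3, 1/2, 0.0],
              [1/3, 1/2, 1/2]])

b = np.array([1.0, 2.0, 3.0])           # local targets: f_i(x) = 0.5*(x - b_i)^2
grad = lambda x: x - b                  # stacked local gradients

x = np.zeros(3)                         # local iterates, one per node
y = grad(x)                             # gradient trackers, initialized to local gradients
lr = 0.03
for _ in range(3000):
    x_new = R @ x - lr * y              # pull: mix iterates, step along tracker
    y = C @ y + grad(x_new) - grad(x)   # push: mix trackers, add gradient increments
    x = x_new
# All nodes approach the network-wide minimizer x* = mean(b) = 2.
```

Column-stochasticity of `C` preserves the sum of the trackers, so `y` collectively tracks the global gradient, while row-stochasticity of `R` drives the iterates to consensus.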