A Unified Analysis of Stochastic Gradient Descent with Arbitrary Data Permutations and Beyond

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing convergence analyses of stochastic gradient descent (SGD) lack guarantees for data permutations with strong inter-epoch dependencies, leaving a gap in the understanding of shuffle-based optimization. Method: The authors propose a unified theoretical framework built on a general assumption that quantifies how strongly permutations depend across epochs, enabling analysis of the four categories of permutation-based SGD identified in the paper: Arbitrary Permutations, Independent Permutations (e.g., Random Reshuffling), One Permutation (e.g., Incremental Gradient), and Dependent Permutations (e.g., GraB). The framework is further extended to federated learning, yielding convergence bounds for regularized-participation FL with arbitrary permutations of clients. Contribution: The work overcomes the limitation of prior unified analyses, which fail under dependent permutations, establishing a broadly applicable convergence theory for shuffle-based SGD and strengthening the design rationale and interpretability of client ordering strategies in federated learning.

📝 Abstract
We aim to provide a unified convergence analysis for permutation-based Stochastic Gradient Descent (SGD), where data examples are permuted before each epoch. By examining the relations among permutations, we categorize existing permutation-based SGD algorithms into four categories: Arbitrary Permutations, Independent Permutations (including Random Reshuffling), One Permutation (including Incremental Gradient, Shuffle One and Nice Permutation) and Dependent Permutations (including GraBs; Lu et al., 2022; Cooper et al., 2023). Existing unified analyses failed to encompass the Dependent Permutations category due to the inter-epoch dependencies in its permutations. In this work, we propose a general assumption that captures these inter-epoch permutation dependencies. Using this general assumption, we develop a unified framework for permutation-based SGD with arbitrary permutations of examples, incorporating all the aforementioned representative algorithms. Furthermore, we adapt our framework from example ordering in SGD to client ordering in Federated Learning (FL). Specifically, we develop a unified framework for regularized-participation FL with arbitrary permutations of clients.
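To make the categorization concrete, the sketch below runs permutation-based SGD where the example order for each epoch is produced by a pluggable `permute` rule. The two rules shown (Random Reshuffling for the Independent Permutations category, Incremental Gradient for the One Permutation category) are illustrative instances only; this is a minimal sketch of the algorithmic setting, not the paper's analysis or its GraB-style dependent orderings.

```python
import numpy as np

def permutation_sgd(grad_fn, x0, n_examples, n_epochs, lr, permute):
    """SGD where data examples are permuted before each epoch.

    `permute(epoch, prev_perm)` returns the example order for the epoch,
    so different shuffling categories plug in as different rules.
    """
    x = x0
    perm = np.arange(n_examples)
    for epoch in range(n_epochs):
        perm = permute(epoch, perm)
        for i in perm:                      # one pass over all examples
            x = x - lr * grad_fn(x, i)
    return x

# Toy objective: f_i(x) = 0.5 * (x - a[i])^2, minimized at mean(a) = 2.0.
a = np.array([1.0, 2.0, 3.0])
grad_fn = lambda x, i: x - a[i]

rng = np.random.default_rng(0)
# Independent Permutations: fresh random order each epoch (Random Reshuffling).
random_reshuffling = lambda epoch, prev: rng.permutation(len(prev))
# One Permutation: the same fixed order every epoch (Incremental Gradient).
incremental_gradient = lambda epoch, prev: np.arange(len(prev))

x_rr = permutation_sgd(grad_fn, 0.0, len(a), 200, 0.05, random_reshuffling)
x_ig = permutation_sgd(grad_fn, 0.0, len(a), 200, 0.05, incremental_gradient)
```

Both orderings converge to a neighborhood of the minimizer on this toy problem; the paper's framework is about how the size of that neighborhood and the rate depend on the permutation category.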
Problem

Research questions and friction points this paper is trying to address.

Stochastic Gradient Descent
Data Ordering
Federated Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Analysis Framework
Stochastic Gradient Descent (SGD)
Federated Learning Efficiency
Yipeng Li
Beijing University of Posts and Telecommunications, Beijing, China
Xinchen Lyu
Beijing University of Posts and Telecommunications
Fog computing · Edge caching · SDN
Zhenyu Liu
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China