🤖 AI Summary
This paper addresses decentralized optimization for multi-agent systems on a connected undirected network, where agents collaboratively minimize the sum of nonconvex, smooth local objective functions defined on a compact Riemannian submanifold, under bandwidth-constrained communication. To this end, the authors propose Quantized Riemannian Gradient Tracking (Q-RGT), the first algorithm achieving an $O(1/K)$ convergence rate under quantized communication—matching the rate of unquantized methods. By tolerating quantization noise, Q-RGT sidesteps the need for an accurate Riemannian projection operator (e.g., the exponential map or an exact retraction), improving per-iteration efficiency. The paper also derives lower bounds on the decentralized consensus error as a function of the quantization level, characterizing the trade-off between quantization precision and consensus accuracy. Experiments demonstrate that Q-RGT significantly reduces both communication overhead and computational cost while maintaining performance comparable to state-of-the-art unquantized baselines.
📝 Abstract
This paper considers the problem of decentralized optimization on compact submanifolds, where a finite sum of smooth (possibly non-convex) local functions is minimized by <inline-formula><tex-math notation="LaTeX">$n$</tex-math></inline-formula> agents forming an undirected and connected graph. However, the efficiency of distributed optimization is often hindered by communication bottlenecks. To mitigate this, we propose the Quantized Riemannian Gradient Tracking (Q-RGT) algorithm, where agents update their local variables using quantized gradients. The introduction of quantization noise allows our algorithm to bypass the constraints of an accurate Riemannian projection operator (such as retraction), further improving iteration efficiency. To the best of our knowledge, this is the first algorithm to achieve an <inline-formula><tex-math notation="LaTeX">$\mathcal{O}(1/K)$</tex-math></inline-formula> convergence rate in the presence of quantization, matching the convergence rate of methods without quantization. Additionally, we explicitly derive lower bounds on the decentralized consensus error as a function of the quantization level. Numerical experiments demonstrate that Q-RGT performs comparably to non-quantized methods while reducing communication bottlenecks and computational overhead.
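To make the setting concrete, here is a minimal, hypothetical sketch of decentralized gradient tracking with quantized communication on the unit sphere (a compact submanifold). The uniform quantizer, uniform mixing matrix, step size, and eigenvalue-type local costs are all illustrative assumptions, not the paper's exact Q-RGT updates.

```python
import numpy as np

def quantize(v, levels=2**8, scale=1.0):
    """Uniform quantizer: snap each entry to the nearest of `levels` grid points in [-scale, scale]."""
    step = 2.0 * scale / (levels - 1)
    return np.clip(np.round(v / step) * step, -scale, scale)

def proj_tangent(x, g):
    """Project a Euclidean gradient g onto the tangent space of the unit sphere at x."""
    return g - (x @ g) * x

def retract(x):
    """Cheap retraction onto the unit sphere: metric projection (normalization)."""
    return x / np.linalg.norm(x)

# Toy problem: n agents minimize sum_i x^T C_i x over the unit sphere.
rng = np.random.default_rng(0)
n, d, K, alpha = 4, 5, 300, 0.02
Cs = []
for _ in range(n):
    M = rng.standard_normal((d, d))
    Cs.append(M + M.T)                     # symmetric local cost matrices

W = np.full((n, n), 1.0 / n)               # doubly stochastic mixing matrix (complete graph)

X = np.array([retract(rng.standard_normal(d)) for _ in range(n)])
grads = np.array([proj_tangent(X[i], 2 * Cs[i] @ X[i]) for i in range(n)])
Y = grads.copy()                           # gradient trackers, initialized at local gradients

for k in range(K):
    # Agents exchange quantized copies of their states and trackers.
    Qx = np.array([quantize(x) for x in X])                # sphere entries lie in [-1, 1]
    Qy = np.array([quantize(y, scale=10.0) for y in Y])    # wider range for trackers
    # Consensus step on quantized states, descent along the tracker, then retraction.
    X_new = np.array([retract(W[i] @ Qx - alpha * Y[i]) for i in range(n)])
    new_grads = np.array([proj_tangent(X_new[i], 2 * Cs[i] @ X_new[i]) for i in range(n)])
    # Gradient-tracking update: mix quantized trackers, add local gradient increment.
    Y = np.array([W[i] @ Qy + new_grads[i] - grads[i] for i in range(n)])
    X, grads = X_new, new_grads

# Consensus error: maximum deviation of an agent's state from the network average.
consensus = max(np.linalg.norm(X[i] - X.mean(axis=0)) for i in range(n))
```

As the abstract's lower bounds suggest, the consensus error here does not vanish but settles at a floor governed by the quantization step; coarser quantizers (fewer `levels`) raise that floor, while the normalization-based retraction keeps every iterate exactly on the manifold.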