Trustworthy Efficient Communication for Distributed Learning using LQ-SGD Algorithm

📅 2025-06-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high communication overhead and vulnerability to gradient inversion attacks in distributed learning, this paper proposes LQ-SGD—a novel gradient compression algorithm integrating low-rank approximation and logarithmic quantization within the PowerSGD framework. Theoretically, LQ-SGD achieves a compression ratio of *O(d/r)*, where *d* is the gradient dimension and *r ≪ d* is the rank, substantially reducing bandwidth requirements. Crucially, it preserves the convergence rate and model accuracy of vanilla SGD. Moreover, the combined nonlinearity of logarithmic quantization and structural constraints of low-rank approximation significantly enhance robustness against gradient inversion attacks, improving system security. Extensive experiments on benchmark tasks demonstrate that LQ-SGD outperforms SGD and state-of-the-art compression methods—including Top-*k* and QSGD—in communication efficiency, convergence stability, and resilience to inversion attacks.
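The compression pipeline summarized above can be made concrete with a short sketch. The code below is a minimal illustration, not the authors' implementation: it performs one PowerSGD-style power-iteration step to obtain a rank-*r* approximation of an *n × m* gradient matrix, so only *r(n + m)* values are communicated instead of *n·m*, which is the source of the *O(d/r)* saving. Function names such as `low_rank_compress` are hypothetical.

```python
import numpy as np

def low_rank_compress(grad: np.ndarray, q: np.ndarray):
    """One PowerSGD-style power-iteration step (illustrative sketch).

    grad : (n, m) gradient matrix
    q    : (m, r) warm-started random matrix, r << min(n, m)
    Returns factors p (n, r) and q (m, r) with p @ q.T ~= grad,
    so only r * (n + m) values are exchanged instead of n * m.
    """
    p = grad @ q               # (n, r) projection onto the sketch
    p, _ = np.linalg.qr(p)     # orthonormalise the columns of p
    q = grad.T @ p             # (m, r) back-projection
    return p, q                # workers exchange p and q, not grad

def low_rank_decompress(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Reconstruct the rank-r approximation of the gradient."""
    return p @ q.T

# Toy usage: a 1024 x 512 gradient compressed with rank r = 4
rng = np.random.default_rng(0)
grad = rng.standard_normal((1024, 512))
q0 = rng.standard_normal((512, 4))
p, q = low_rank_compress(grad, q0)
approx = low_rank_decompress(p, q)
print("values sent:", p.size + q.size, "instead of", grad.size)
```

In LQ-SGD the transmitted factors are additionally passed through logarithmic quantization before communication; a separate sketch of that step appears after the Innovation list below.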

📝 Abstract
We propose LQ-SGD (Low-Rank Quantized Stochastic Gradient Descent), a communication-efficient gradient compression algorithm designed for distributed training. LQ-SGD builds on PowerSGD by incorporating low-rank approximation and logarithmic quantization, drastically reducing communication overhead while preserving the convergence speed of training and model accuracy. In addition, LQ-SGD and other compression-based methods show stronger resistance to gradient inversion attacks than traditional SGD, providing a more robust and efficient optimization path for distributed learning systems.
Problem

Research questions and friction points this paper is trying to address.

Reducing communication overhead in distributed training
Ensuring convergence speed and model accuracy
Enhancing resistance to gradient inversion attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank approximation reduces communication overhead
Log-quantization enhances gradient compression efficiency (a sketch follows this list)
Resists gradient inversion better than traditional SGD
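To make the log-quantization point above concrete, here is a minimal, hypothetical sketch (not the paper's code): each value is encoded as a sign plus a clipped integer exponent on a logarithmic grid, so it costs a few bits instead of 32. The helper names `log_quantize` and `log_dequantize` are assumptions for illustration.

```python
import numpy as np

def log_quantize(x: np.ndarray, bits: int = 4):
    """Quantize values to sign * 2**exponent (illustrative sketch).

    Each entry is encoded as a sign bit plus a clipped integer
    exponent, so it costs roughly `bits` bits instead of 32.
    """
    sign = np.sign(x)
    mag = np.abs(x)
    mag = np.where(mag == 0, np.finfo(x.dtype).tiny, mag)  # avoid log(0)
    exp = np.round(np.log2(mag)).astype(np.int32)
    lo = exp.max() - (2 ** (bits - 1) - 1)                  # keep a small exponent range
    exp = np.clip(exp, lo, exp.max())
    return sign, exp

def log_dequantize(sign: np.ndarray, exp: np.ndarray) -> np.ndarray:
    """Reconstruct approximate values from sign and exponent."""
    return sign * np.exp2(exp.astype(np.float64))

# Toy usage on a low-rank factor
rng = np.random.default_rng(1)
p = rng.standard_normal((8, 4))
sign, exp = log_quantize(p, bits=4)
p_hat = log_dequantize(sign, exp)
print("max abs error:", np.abs(p - p_hat).max())
```

The nonlinearity of this mapping is also what the paper credits for degrading gradient inversion attacks: the receiver (or an eavesdropper) only ever sees coarsely log-quantized low-rank factors, not the raw gradient.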
Hongyang Li
University of Luxembourg, Luxembourg
Caesar Wu
University of Luxembourg, Luxembourg
Pascal Bouvry
Professor of Computer Science, University of Luxembourg
Optimisation, Cloud/Distributed/Parallel Computing, Ad Hoc Networks
Lincen Bai
University of Paris-Saclay, France
Said Mammar
University of Paris-Saclay, France
Mohammed Chadli
University of Paris-Saclay, France