🤖 AI Summary
To address the high communication overhead and vulnerability to gradient inversion attacks in distributed learning, this paper proposes LQ-SGD—a novel gradient compression algorithm integrating low-rank approximation and logarithmic quantization within the PowerSGD framework. Theoretically, LQ-SGD achieves a compression ratio of *O(d/r)*, where *d* is the gradient dimension and *r ≪ d* is the rank, substantially reducing bandwidth requirements. Crucially, it preserves the convergence rate and model accuracy of vanilla SGD. Moreover, the combined nonlinearity of logarithmic quantization and structural constraints of low-rank approximation significantly enhance robustness against gradient inversion attacks, improving system security. Extensive experiments on benchmark tasks demonstrate that LQ-SGD outperforms SGD and state-of-the-art compression methods—including Top-*k* and QSGD—in communication efficiency, convergence stability, and resilience to inversion attacks.
📝 Abstract
We propose LQ-SGD (Low-Rank Quantized Stochastic Gradient Descent), a communication-efficient gradient compression algorithm for distributed training. LQ-SGD builds on PowerSGD by combining low-rank approximation with logarithmic quantization, drastically reducing communication overhead while preserving training convergence speed and model accuracy. In addition, LQ-SGD and other compression-based methods show stronger resistance to gradient inversion attacks than traditional SGD, providing a more robust and efficient optimization path for distributed learning systems.
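To make the two ingredients concrete, the sketch below shows the general pattern the abstract describes: a PowerSGD-style rank-*r* factorization of a gradient matrix, followed by logarithmic (power-of-two) quantization of the factors. This is a minimal illustration of the idea, not the paper's exact algorithm; the function names, the 4-bit level count, and the single power-iteration step are assumptions for the example.

```python
import numpy as np

def low_rank_factors(grad, r, rng):
    # One PowerSGD-style power-iteration step: grad ≈ p @ q.T with rank r.
    # (Illustrative; the paper's method may iterate and reuse q across steps.)
    n, m = grad.shape
    q = rng.standard_normal((m, r))
    p = grad @ q                       # (n, r)
    p, _ = np.linalg.qr(p)             # orthonormalize the left factor
    q = grad.T @ p                     # (m, r)
    return p, q

def log_quantize(x, bits=4):
    # Logarithmic quantization: keep the sign, snap |x| to the nearest
    # power of two, and clip exponents to a 2**bits-level window.
    sign = np.sign(x)
    mag = np.where(np.abs(x) == 0, 1e-12, np.abs(x))
    exp = np.round(np.log2(mag))
    hi = exp.max()
    exp = np.clip(exp, hi - (2 ** bits - 1), hi)
    return sign * np.power(2.0, exp)

rng = np.random.default_rng(0)
grad = rng.standard_normal((256, 128))     # toy gradient matrix
p, q = low_rank_factors(grad, r=4, rng=rng)
recon = log_quantize(p) @ log_quantize(q).T

# Payload shrinks from n*m entries to r*(n+m) quantized entries,
# matching the O(d/r) compression ratio claimed above.
ratio = (256 * 128) / (4 * (256 + 128))
```

With *n* = 256, *m* = 128, and *r* = 4, the factorized payload is roughly 21× smaller than the raw gradient before quantization even adds its own savings, which is why the rank *r ≪ d* dominates the compression ratio.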