OLALa: Online Learned Adaptive Lattice Codes for Heterogeneous Federated Learning

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, the communication overhead of transmitting model updates is substantial, especially under data and system heterogeneity, where static quantization strategies fail to adapt to shifting update distributions. To address this, the authors propose OLALa (Online Learned Adaptive Lattices), the first framework to bring an online-learned adaptive lattice quantizer into federated learning. Each client independently optimizes its dithered lattice quantization parameters in real time via a lightweight online algorithm, using only local update statistics, and exchanges only a compact set of quantization parameters, eliminating the need for global coordination. OLALa significantly reduces communication load while preserving convergence guarantees and model accuracy. Experiments across diverse quantization rates show that OLALa consistently outperforms fixed-codebook and non-adaptive baselines, achieving higher test accuracy and faster convergence on CIFAR-10, CIFAR-100, and Tiny-ImageNet.

📝 Abstract
Federated learning (FL) enables collaborative training across distributed clients without sharing raw data, often at the cost of substantial communication overhead induced by transmitting high-dimensional model updates. This overhead can be alleviated by having the clients quantize their model updates, with dithered lattice quantizers identified as an attractive scheme due to their structural simplicity and convergence-preserving properties. However, existing lattice-based FL schemes typically rely on a fixed quantization rule, which is suboptimal in heterogeneous and dynamic environments where the distribution of model updates varies across users and training rounds. In this work, we propose Online Learned Adaptive Lattices (OLALa), a heterogeneous FL framework where each client can adjust its quantizer online using lightweight local computations. We first derive convergence guarantees for FL with non-fixed lattice quantizers and show that proper lattice adaptation can tighten the convergence bound. Then, we design an online learning algorithm that enables clients to tune their quantizers throughout the FL process while exchanging only a compact set of quantization parameters. Numerical experiments demonstrate that OLALa consistently improves learning performance under various quantization rates, outperforming conventional fixed-codebook and non-adaptive schemes.
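As a rough illustration of the dithered lattice quantization the abstract refers to, here is a minimal NumPy sketch. The generator matrix `G`, the rounding rule, and all names are assumptions for illustration, not the paper's implementation; for general lattices the exact nearest-point search is more involved than the coordinate rounding used here.

```python
import numpy as np

def dithered_lattice_quantize(x, G, rng):
    """Subtractive dithered quantization on the lattice {G z : z integer}.

    G is a (d x d) generator matrix; the dither u is uniform over the
    fundamental cell and shared between sender and receiver via a common
    seed, so the receiver can subtract it. (All names are illustrative.)
    """
    d = x.shape[0]
    u = G @ rng.random(d)                    # dither uniform over G [0,1)^d
    z = np.rint(np.linalg.solve(G, x + u))   # round in lattice coordinates
    q = G @ z                                # the client would transmit the integers z
    return q - u                             # receiver subtracts the shared dither

# Toy round trip on a scaled integer lattice: the scale of G trades
# rate (coarser lattice, fewer bits) against quantization distortion.
rng = np.random.default_rng(0)
G = 0.5 * np.eye(4)
x = rng.standard_normal(4)
x_hat = dithered_lattice_quantize(x, G, np.random.default_rng(1))
# per-coordinate error is bounded by half the lattice spacing (0.25 here)
```

With subtractive dither the reconstruction error equals `G (round(y) - y)` for `y = G^{-1}(x + u)`, so it is bounded by the lattice cell regardless of `x`, which is what makes the scheme's distortion statistics independent of the update being quantized.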
Problem

Research questions and friction points this paper is trying to address.

High communication overhead in FL from transmitting high-dimensional model updates
Fixed lattice quantizers are suboptimal when update distributions vary across heterogeneous clients and training rounds
Tuning quantization parameters online using only lightweight local computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online adaptive lattice quantization for FL
Lightweight local computation for quantizer tuning
Convergence guarantees for non-fixed quantizers
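The online tuning idea above can be sketched as a toy loop in which each client adapts a single lattice scale from local update statistics and ships only that scalar as quantizer metadata. The RMS tracking rule, the 0.5 operating point, and all names are hypothetical stand-ins, not the paper's learned-lattice algorithm.

```python
import numpy as np

def adapt_scale(update, s, lr=0.05):
    """One online step of a scalar lattice-scale adaptation rule.

    Tracks the RMS of the local model update so the quantizer cell size
    shrinks as updates shrink over training. The 0.5 factor and the
    plain tracking step are assumed, not taken from the paper.
    """
    rms = float(np.sqrt(np.mean(update ** 2)))
    target = 0.5 * rms            # assumed distortion/rate operating point
    return s + lr * (target - s)  # move the scale toward the target

rng = np.random.default_rng(0)
s = 1.0
for t in range(50):                              # simulated FL rounds
    g = (0.9 ** t) * rng.standard_normal(1000)   # updates shrink as training converges
    s = adapt_scale(g, s)
    # only the scalar s (compact quantizer metadata) would be sent upstream
```

The point of the sketch is the communication pattern: the quantizer adapts per client and per round, yet the only extra traffic is the compact parameter set (here a single scalar), matching the paper's claim of no global coordination.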