Robust Distributed Learning under Resource Constraints: Decentralized Quantile Estimation via (Asynchronous) ADMM

📅 2026-01-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of efficient, robust decentralized learning on resource-constrained edge devices, where high communication overhead, substantial memory usage, and sensitivity to data corruption are major obstacles. To this end, we propose AsylADMM, a lightweight asynchronous gossip algorithm in which each node maintains only two variables, yielding the first asynchronous quantile estimation method whose memory footprint is independent of the network degree. Built on an asynchronous ADMM framework and combined with quantile- and rank-based trimming techniques, AsylADMM extends naturally to a range of robust learning tasks, including quantile clipping, geometric median computation, and depth-based trimming. Theoretical analysis establishes the convergence of its synchronous variant, while experiments show that AsylADMM converges rapidly and outperforms existing rank-based methods on quantile clipping tasks.

📝 Abstract
Decentralized learning on resource-constrained edge devices requires algorithms that are communication-efficient, robust to data corruption, and lightweight in memory usage. While state-of-the-art gossip-based methods satisfy the first requirement, achieving robustness remains challenging. Asynchronous decentralized ADMM-based methods have been explored for estimating the median, a statistical centrality measure that is notably more robust than the mean. However, existing approaches require memory that scales with node degree, making them impractical when memory is limited. In this paper, we propose AsylADMM, a novel gossip algorithm for decentralized median and quantile estimation, primarily designed for asynchronous updates and requiring only two variables per node. We analyze a synchronous variant of AsylADMM to establish theoretical guarantees and empirically demonstrate fast convergence for the asynchronous algorithm. We then show that our algorithm enables quantile-based trimming, geometric median estimation, and depth-based trimming, with quantile-based trimming empirically outperforming existing rank-based methods. Finally, we provide a novel theoretical analysis of rank-based trimming via Markov chain theory.
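To make the problem setting concrete, the sketch below illustrates decentralized quantile estimation on a gossip network. It is a plain synchronous consensus-subgradient baseline on the pinball (quantile) loss, not the paper's AsylADMM update; the graph topology, step-size schedule, and function names are illustrative assumptions.

```python
import numpy as np

def decentralized_quantile(data, neighbors, tau=0.5, steps=2000, lr=0.5):
    """Consensus-subgradient sketch of decentralized quantile estimation.

    Each node i holds one scalar data[i] and an estimate z[i]. In every
    synchronous round it averages the estimates of its neighbors (plus its
    own) and takes a subgradient step on the local pinball loss
        rho_tau(x - z) = max(tau*(x - z), (tau - 1)*(x - z)),
    whose minimizer over the pooled data is the global tau-quantile.
    NOTE: this is an illustrative baseline, not the paper's AsylADMM.
    """
    n = len(data)
    z = np.array(data, dtype=float)              # initialize at local values
    for t in range(steps):
        step = lr / np.sqrt(t + 1)               # diminishing step size
        z_new = np.empty(n)
        for i in range(n):
            nbrs = neighbors[i] + [i]
            avg = np.mean([z[j] for j in nbrs])  # local gossip averaging
            # subgradient of the pinball loss at z[i] for the local datum
            g = -tau if data[i] > z[i] else (1.0 - tau)
            z_new[i] = avg - step * g
        z = z_new
    return z

# Ring topology over 11 nodes, data 0..10: the global median is 5.
n = 11
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
data = list(range(n))
est = decentralized_quantile(data, neighbors, tau=0.5)
```

With a diminishing step size, all node estimates contract toward consensus and oscillate around the global median; AsylADMM's contribution, per the abstract, is to achieve this asynchronously with only two variables per node.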
Problem

Research questions and friction points this paper is trying to address.

decentralized learning
resource constraints
robustness
quantile estimation
asynchronous ADMM
Innovation

Methods, ideas, or system contributions that make the work stand out.

AsylADMM
decentralized quantile estimation
asynchronous ADMM
robust distributed learning
communication-efficient gossip algorithm
Anna Van Elst
LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Igor Colin
Télécom Paris
machine learning, optimization
Stéphan Clémençon
LTCI, Télécom Paris, Institut Polytechnique de Paris, France