Diffusion-based Decentralized Federated Multi-Task Representation Learning

๐Ÿ“… 2025-12-28
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses decentralized multi-task linear regression under data scarcity, where tasks share an underlying low-dimensional linear representation but lack centralized coordination. Method: We propose the first serverless federated multi-task representation learning algorithm based on alternating projected gradient descent. It operates over a diffusion-based network topology, integrating distributed optimization, low-rank matrix recovery, and iterative projection-based updatesโ€”without any central server. Contribution/Results: Theoretically, we establish the first sample complexity lower bound and iteration complexity upper bound, rigorously proving optimal time and communication efficiency. Empirically, our algorithm achieves faster convergence and significantly lower communication overhead compared to state-of-the-art baselines, demonstrating both strong theoretical guarantees and practical efficacy.

Technology Category

Application Category

๐Ÿ“ Abstract
Representation learning is a widely adopted framework for learning in data-scarce environments to obtain a feature extractor or representation from various different yet related tasks. Despite extensive research on representation learning, decentralized approaches remain relatively underexplored. This work develops a decentralized projected gradient descent-based algorithm for multi-task representation learning. We focus on the problem of multi-task linear regression in which multiple linear regression models share a common, low-dimensional linear representation. We present an alternating projected gradient descent and minimization algorithm for recovering a low-rank feature matrix in a diffusion-based decentralized and federated fashion. We obtain constructive, provable guarantees that provide a lower bound on the required sample complexity and an upper bound on the iteration complexity of our proposed algorithm. We analyze the time and communication complexity of our algorithm and show that it is fast and communication-efficient. We performed numerical simulations to validate the performance of our algorithm and compared it with benchmark algorithms.
Problem

Research questions and friction points this paper is trying to address.

Develops decentralized algorithm for multi-task representation learning
Focuses on multi-task linear regression with shared low-dimensional representation
Provides theoretical guarantees on sample and iteration complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized projected gradient descent algorithm
Diffusion-based federated multi-task representation learning
Alternating minimization for low-rank feature recovery
๐Ÿ”Ž Similar Papers
No similar papers found.
Donghwa Kang
Donghwa Kang
KAIST
DNNReal Time SystemSNNAI Security
S
Shana Moothedath
Electrical and Computer Engineering, Iowa State University