Mobility-Aware Multi-Task Decentralized Federated Learning for Vehicular Networks: Modeling, Analysis, and Optimization

📅 2025-03-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address resource allocation challenges in vehicular networks—arising from high vehicle mobility, resource constraints, and concurrent multi-task workloads—this paper proposes a mobility-aware decentralized multi-task federated learning framework that jointly optimizes task scheduling, subcarrier allocation, and leader selection. We formulate the problem for the first time as a resource game with provable Nash equilibrium and innovatively recast it as a decentralized partially observable Markov decision process (DEC-POMDP). To solve it, we design a heterogeneous-agent proximal policy optimization (HAPPO) algorithm. We theoretically derive a convergence bound for single-task training, ensuring stability. Experimental results demonstrate that, compared to baseline methods, our approach improves model accuracy by 12.7%, reduces communication overhead by 28.4%, and increases task completion rate by 31.5%.

Technology Category

Application Category

📝 Abstract
Federated learning (FL) is a promising paradigm that can enable collaborative model training between vehicles while protecting data privacy, thereby significantly improving the performance of intelligent transportation systems (ITSs). In vehicular networks, due to mobility, resource constraints, and the concurrent execution of multiple training tasks, how to allocate limited resources effectively to achieve optimal model training of multiple tasks is an extremely challenging issue. In this paper, we propose a mobility-aware multi-task decentralized federated learning (MMFL) framework for vehicular networks. By this framework, we address task scheduling, subcarrier allocation, and leader selection, as a joint optimization problem, termed as TSLP. For the case with a single FL task, we derive the convergence bound of model training. For general cases, we first model TSLP as a resource allocation game, and prove the existence of a Nash equilibrium (NE). Then, based on this proof, we reformulate the game as a decentralized partially observable Markov decision process (DEC-POMDP), and develop an algorithm based on heterogeneous-agent proximal policy optimization (HAPPO) to solve DEC-POMDP. Finally, numerical results are used to demonstrate the effectiveness of the proposed algorithm.
Problem

Research questions and friction points this paper is trying to address.

Optimize resource allocation in vehicular networks for multi-task federated learning.
Address mobility and resource constraints in decentralized federated learning.
Develop algorithms for efficient task scheduling and model training convergence.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mobility-aware multi-task decentralized federated learning framework
Resource allocation game with Nash equilibrium proof
HAPPO algorithm for DEC-POMDP optimization
🔎 Similar Papers
No similar papers found.
D
Dongyu Chen
School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
T
Tao Deng
School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
H
He Huang
School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
Juncheng Jia
Juncheng Jia
Soochow University
Edge IntelligenceFederated LearningInternet of ThingsMobile Computing
M
Mianxiong Dong
Department of Sciences and Informatics, Muroran Institute of Technology, Muroran 050-8585, Japan
Di Yuan
Di Yuan
Department of Information Technology, Uppsala University, 751 05 Uppsala, Sweden
Keqin Li
Keqin Li
AMA University
RoboticMachine learningArtificial intelligenceComputer vision