pFedNavi: Structure-Aware Personalized Federated Vision-Language Navigation for Embodied AI

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses privacy concerns in vision-and-language navigation arising from reliance on private trajectory data, as well as the performance limitations of conventional federated learning under highly heterogeneous client environments and instruction styles. To tackle these challenges, the authors propose a structure-aware personalized federated learning framework that dynamically identifies client-specific layers—such as encoder-decoder projection and environment-sensitive decoder layers—and adaptively fuses global and local parameters via layer-wise mixing coefficients. This approach achieves a fine-grained balance between knowledge sharing and personalization. Evaluated on the R2R and RxR benchmarks using ResNet and CLIP visual representations, the method demonstrates significant improvements under non-IID settings: up to a 7.5% increase in navigation success rate, up to a 7.8% gain in trajectory fidelity, and a 1.38× acceleration in convergence speed.

📝 Abstract
Vision-Language Navigation (VLN) requires large-scale trajectory-instruction data from private indoor environments, raising significant privacy concerns. Federated Learning (FL) mitigates this by keeping data on-device, but vanilla FL struggles under VLN's extreme cross-client heterogeneity in environments and instruction styles, making a single global model suboptimal. This paper proposes pFedNavi, a structure-aware and dynamically adaptive personalized federated learning framework tailored for VLN. Our key idea is to personalize where it matters: pFedNavi adaptively identifies client-specific layers via layer-wise mixing coefficients, and performs fine-grained parameter fusion on the selected components (e.g., the encoder-decoder projection and environment-sensitive decoder layers) to balance global knowledge sharing with local specialization. We evaluate pFedNavi on two standard VLN benchmarks, R2R and RxR, using both ResNet and CLIP visual representations. Across all metrics, pFedNavi consistently outperforms the FedAvg-based VLN baseline, achieving up to 7.5% improvement in navigation success rate and up to 7.8% gain in trajectory fidelity, while converging 1.38× faster under non-IID conditions.
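The layer-wise fusion idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and layer names (`fuse_layers`, `encoder.proj`, `decoder.env`) and the fixed coefficient values are assumptions for demonstration, and pFedNavi's actual rule for learning or selecting the mixing coefficients is not specified here.

```python
def fuse_layers(global_params, local_params, mix_coeffs):
    """Blend global and client-local weights per layer.

    mix_coeffs[name] in [0, 1]: values near 1.0 keep the shared global
    weights (knowledge sharing); values near 0.0 keep the client's own
    weights (personalization for environment-sensitive layers).
    """
    fused = {}
    for name, g_weights in global_params.items():
        alpha = mix_coeffs.get(name, 1.0)  # default: fully global
        l_weights = local_params[name]
        fused[name] = [alpha * gw + (1.0 - alpha) * lw
                       for gw, lw in zip(g_weights, l_weights)]
    return fused


# Toy example: the projection layer stays mostly global, while an
# environment-sensitive decoder layer stays mostly local.
global_params = {"encoder.proj": [1.0, 1.0], "decoder.env": [1.0, 1.0]}
local_params = {"encoder.proj": [0.0, 0.0], "decoder.env": [0.0, 0.0]}
mix = {"encoder.proj": 0.9, "decoder.env": 0.2}

fused = fuse_layers(global_params, local_params, mix)
print(fused["encoder.proj"])  # [0.9, 0.9] — dominated by the global model
print(fused["decoder.env"])   # [0.2, 0.2] — dominated by the local model
```

In a real framework the per-layer coefficients would be updated during local training (e.g., by gradient descent on the client's loss) rather than fixed by hand as above.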
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Navigation
Federated Learning
Privacy
Cross-client Heterogeneity
Personalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

personalized federated learning
vision-language navigation
layer-wise adaptation
non-IID
parameter fusion