🤖 AI Summary
Federated learning (FL) for person re-identification (Re-ID) faces dual challenges: statistical heterogeneity induced by non-IID data and prohibitive communication overhead from large models. To address these, this paper proposes a lightweight adaptive pruning framework. Our method introduces three key innovations: (1) KL-divergence regularization to enforce consistency of local feature distributions across clients; (2) a distribution-similarity-based weighted aggregation mechanism to improve model fusion; and (3) an integrated strategy combining sparse activation skipping with cross-round recovery for dynamic pruning control and parameter-efficient aggregation. Extensive experiments on eight benchmark Re-ID datasets demonstrate that, compared to state-of-the-art methods, our approach reduces communication costs by 33–38% on ResNet-50 and by 20–40% on ResNet-34, while incurring ≤1% accuracy degradation. The framework significantly enhances both communication efficiency and convergence stability.
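The KL-divergence regularization in point (1) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name `kl_regularized_loss`, the softmax-over-logits form of the distributions, and the weighting coefficient `lam` are all assumptions for exposition.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_regularized_loss(task_loss, local_logits, global_logits, lam=0.1):
    """Add a KL(p_local || p_global) consistency penalty to the task loss.

    Hypothetical form: the local model is pulled toward the global
    feature/output distribution, which is the stated goal of the
    KL-divergence regularization.
    """
    p = softmax(local_logits)   # local distribution
    q = softmax(global_logits)  # global (reference) distribution
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1).mean()
    return task_loss + lam * kl
```

When the local and global distributions coincide, the penalty vanishes and the loss reduces to the plain task loss; as they diverge under non-IID data, the extra term grows and pulls the client back toward the global model.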
📝 Abstract
Person re-identification (Re-ID) is a fundamental task in intelligent surveillance and public safety. Federated learning (FL) offers a privacy-preserving solution by enabling collaborative model training without centralized data collection. However, applying FL to real-world Re-ID systems faces two major challenges: statistical heterogeneity across clients due to non-IID data distributions, and substantial communication overhead caused by frequent transmission of large-scale models. To address these issues, we propose FedKLPR, a lightweight and communication-efficient federated learning framework for person re-identification. FedKLPR introduces four key components. First, the KL-Divergence Regularization Loss (KLL) constrains each local model by minimizing the divergence of its feature distribution from the global one, mitigating the effects of statistical heterogeneity and improving convergence stability under non-IID conditions. Second, KL-Divergence-Prune Weighted Aggregation (KLPWA) integrates pruning ratio and distributional similarity into the aggregation process, improving the robustness of the global model while significantly reducing communication overhead. Third, Sparse Activation Skipping (SAS) excludes zero-valued weights from the update process, preventing critical parameters from being diluted when pruned client models are aggregated. Finally, Cross-Round Recovery (CRR) introduces a dynamic pruning control mechanism that halts pruning when necessary, enabling deeper compression while preserving model accuracy. Experiments on eight benchmark datasets demonstrate that FedKLPR achieves substantial communication savings: compared with state-of-the-art methods, it reduces communication cost by 33%–38% on ResNet-50 and by 20%–40% on ResNet-34 while keeping accuracy degradation within 1%.
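The idea behind SAS-style aggregation, excluding zero-valued (pruned) weights so they do not dilute surviving parameters, can be sketched as below. This is a hedged NumPy sketch under assumptions of ours: the function name `sas_aggregate`, the per-client scalar scores (standing in for KLPWA's distribution-similarity weights), and the element-wise masking scheme are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def sas_aggregate(client_weights, client_scores):
    """Element-wise weighted average that skips pruned (zero) entries.

    client_weights: list of equally-shaped arrays, one per client,
                    where pruned parameters are exactly zero.
    client_scores:  one non-negative aggregation weight per client
                    (e.g. a distribution-similarity score).
    """
    W = np.stack(client_weights)                      # (n_clients, ...)
    s = np.asarray(client_scores, dtype=float)
    s = s.reshape((-1,) + (1,) * (W.ndim - 1))        # broadcast over params
    mask = (W != 0).astype(float)                     # 0 where pruned
    num = (W * mask * s).sum(axis=0)
    den = (mask * s).sum(axis=0)
    # Positions pruned by every client stay zero in the global model
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
```

Because the denominator counts only clients that actually kept a parameter, a weight pruned by most clients is averaged over the remaining contributors instead of being dragged toward zero by the pruned copies.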