🤖 AI Summary
To address two critical challenges in federated learning (FL) for 6G edge intelligence—data reconstruction risks arising from parameter sharing and poor global model generalization to non-IID local data—this paper proposes a novel FL framework integrating personalized differential privacy (PDP) and adaptive neural architecture search (NAS). First, it pioneers the application of PDP at the representation level, replacing raw parameter exchange with sample-level feature sharing to fundamentally mitigate reconstruction attacks. Second, it introduces a privacy-aware federated NAS mechanism that discovers lightweight, client-specific architectures under rigorous differential privacy constraints. Theoretical analysis establishes convergence guarantees. Experiments on CIFAR-10/100 demonstrate that the method achieves a 6.82% accuracy improvement over PerFedRLNAS, reduces model size to 1/10, and cuts communication overhead to 1/20.
📝 Abstract
The Sixth-Generation (6G) network envisions pervasive artificial intelligence (AI) as a core goal, enabled by edge intelligence through on-device data utilization. To realize this vision, federated learning (FL) has emerged as a key paradigm for collaborative training across edge devices. However, the sensitivity and heterogeneity of edge data pose two central challenges for FL: parameter sharing risks data reconstruction, and a unified global model struggles to adapt to diverse local distributions. In this paper, we propose a novel federated learning framework that integrates personalized differential privacy (DP) and adaptive model design. To protect training data, we leverage sample-level representations for knowledge sharing and apply a personalized DP strategy to resist reconstruction attacks. To ensure distribution-aware adaptation under privacy constraints, we develop a privacy-aware neural architecture search (NAS) algorithm that generates locally customized architectures and hyperparameters. To the best of our knowledge, this is the first personalized DP solution tailored for representation-based FL with theoretical convergence guarantees. Our scheme achieves strong privacy guarantees for training data while significantly outperforming state-of-the-art methods in model performance. Experiments on benchmark datasets such as CIFAR-10 and CIFAR-100 demonstrate that our scheme improves accuracy by 6.82% over the federated NAS method PerFedRLNAS, while reducing model size to 1/10 and communication cost to 1/20.
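To make the core idea of "personalized DP on sample-level representations" concrete, the sketch below shows one standard way such a mechanism could look: each client L2-clips a sample's feature vector and adds Gaussian noise calibrated to its own (epsilon, delta) budget before sharing it, instead of sharing raw model parameters. This is a minimal illustration using the classical Gaussian mechanism, not the paper's actual algorithm; the function name, clipping constant, and noise calibration are assumptions for exposition.

```python
import numpy as np

def privatize_representation(z, epsilon, delta=1e-5, clip_norm=1.0, rng=None):
    """Hypothetical sketch: release a sample-level representation under a
    per-client (personalized) privacy budget via the Gaussian mechanism.
    The paper's actual mechanism and privacy accounting may differ."""
    rng = rng or np.random.default_rng()
    z = np.asarray(z, dtype=float)

    # L2-clip so one sample's representation has bounded sensitivity clip_norm.
    norm = np.linalg.norm(z)
    if norm > clip_norm:
        z = z * (clip_norm / norm)

    # Classical Gaussian-mechanism calibration: smaller epsilon -> larger noise.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return z + rng.normal(0.0, sigma, size=z.shape)

# Personalization: each client applies its own budget to what it shares.
client_budgets = {"client_a": 0.5, "client_b": 4.0}  # illustrative values
features = np.random.default_rng(1).normal(size=(2, 16))
shared = {
    cid: privatize_representation(f, eps)
    for (cid, eps), f in zip(client_budgets.items(), features)
}
```

A stricter budget (client_a) yields noisier shared features, trading utility for privacy; the server then aggregates these noised representations rather than raw gradients or weights, which is what blunts reconstruction attacks.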