AI Summary
This work addresses a critical privacy vulnerability in low-altitude wireless networks, where existing federated unlearning methods lack client-side verifiability, potentially allowing untrusted servers to retain data from departing clients. To resolve this issue, the paper proposes VerFU, the first federated unlearning framework that provides client-verifiable guarantees. VerFU integrates linear homomorphic hashing with commitment schemes to construct tamper-proof records of model updates. Leveraging the linear composability inherent in federated learning updates, the framework supports parallel unlearning across multiple clients and efficient verification. Experimental results demonstrate that VerFU achieves strong model utility while significantly reducing communication and verification overhead, thereby enabling efficient, secure, and verifiable federated unlearning.
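The "tamper-proof records" mentioned above rest on a standard commitment scheme: the server publishes a binding commitment to each historical update record and later opens it with a decommitment parameter, so any after-the-fact alteration is detectable. A minimal hash-based sketch of this idea (the construction and names are illustrative assumptions, not the paper's exact scheme):

```python
# Toy commit/open scheme illustrating tamper-proof records of updates.
# SHA-256 with a random nonce stands in for the paper's actual commitment;
# this is an illustrative sketch, not VerFU's construction.
import hashlib
import secrets

def commit(record: bytes):
    """Commit to a record; return (public digest, secret decommitment nonce)."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + record).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: bytes, record: bytes) -> bool:
    """Check that (nonce, record) opens the published digest."""
    return hashlib.sha256(nonce + record).hexdigest() == digest

record = b"hash-of-update-round-7"   # hypothetical record contents
c, r = commit(record)
assert verify(c, r, record)              # honest opening succeeds
assert not verify(c, r, b"tampered")     # an altered record is detected
```

A client that stores the digest at training time can later demand the nonce and record, making silent substitution of historical updates infeasible.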
Abstract
In low-altitude wireless networks (LAWN), federated learning (FL) enables collaborative intelligence among unmanned aerial vehicles (UAVs) and integrated sensing and communication (ISAC) devices while keeping raw sensing data local. Due to the "right to be forgotten" requirements and the high mobility of ISAC devices that frequently enter or leave the coverage region of UAV-assisted servers, the influence of departing devices must be removed from trained models. This necessity motivates the adoption of federated unlearning (FUL) to eliminate historical device contributions from the global model in LAWN. However, existing FUL approaches implicitly assume that the UAV-assisted server executes unlearning operations honestly. Without client-verifiable guarantees, an untrusted server may retain residual device information, leading to potential privacy leakage and undermining trust. To address this issue, we propose VerFU, a privacy-preserving and client-verifiable federated unlearning framework designed for LAWN. It empowers ISAC devices to validate the server-side unlearning operations without relying on original data samples. By integrating linear homomorphic hash (LHH) with commitment schemes, VerFU constructs tamper-proof records of historical updates. ISAC devices ensure the integrity of unlearning results by verifying decommitment parameters and utilizing the linear composability of LHH to check whether the global model accurately removes their historical contributions. Furthermore, VerFU is capable of efficiently processing parallel unlearning requests and verification from multiple ISAC devices. Experimental results demonstrate that our framework efficiently preserves model utility post-unlearning while maintaining low communication and verification overhead.
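The verification step hinges on the linear composability of the linear homomorphic hash (LHH): because FL aggregation is a weighted sum of client updates, the hash of an aggregate equals the product of the individual hashes, and removing a client's contribution corresponds to dividing by that client's hash. A minimal sketch under toy assumptions (a multiplicative group mod a prime, integer-valued updates, made-up dimensions; not the paper's actual parameters):

```python
# Minimal linear homomorphic hash sketch: H(v) = prod_i g_i^{v_i} mod P.
# Parameters below (prime, generators, 4-dim integer updates) are
# illustrative assumptions, not VerFU's real instantiation.
import random

P = 2**127 - 1                       # a Mersenne prime; toy group modulus
DIM = 4                              # toy model-update dimension
rng = random.Random(42)
GENS = [rng.randrange(2, P) for _ in range(DIM)]   # public generators

def lhh(update):
    """Hash an integer vector; linear: H(u + v) = H(u) * H(v) mod P."""
    h = 1
    for g, v in zip(GENS, update):
        h = h * pow(g, v % (P - 1), P) % P
    return h

u1 = [3, 1, 4, 1]                    # departing client's update
u2 = [2, 7, 1, 8]                    # remaining client's update

# Aggregation: hash of the sum equals the product of hashes.
agg = [a + b for a, b in zip(u1, u2)]
assert lhh(agg) == lhh(u1) * lhh(u2) % P

# Unlearning client 1: the client checks that the new global hash equals
# the old one times the modular inverse of its own update's hash.
unlearned = [a - b for a, b in zip(agg, u1)]
assert lhh(unlearned) == lhh(agg) * pow(lhh(u1), -1, P) % P
```

Since the check operates only on hashes and decommitment parameters, a device can confirm its contribution was removed without retaining or revealing its raw sensing data, which is the property the abstract claims.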