🤖 AI Summary
Existing membership inference attacks (MIAs) in federated learning underutilize update information from non-target clients, limiting detection robustness. Method: We propose the "All for One" attack paradigm, the first to formulate a one-tailed hypothesis test based on the likelihood ratio of multi-client gradient updates, moving beyond the conventional reliance on the target client alone. By jointly modeling cross-client and cross-round gradient statistics and incorporating federated aggregation dynamics, our approach enables more robust collective inference. Contribution/Results: The method integrates seamlessly with existing MIA frameworks and consistently outperforms state-of-the-art methods on both classification and generative tasks. It maintains high attack success rates under challenging conditions, including Non-IID data distributions, mainstream defenses (e.g., differential privacy, gradient clipping), and heterogeneous federated architectures. The implementation is publicly available.
📝 Abstract
Federated Learning (FL) is a promising approach for training machine learning models on decentralized data while preserving privacy. However, privacy risks, particularly Membership Inference Attacks (MIAs), which aim to determine whether a specific data point belongs to a target client's training set, remain a significant concern. Existing methods for implementing MIAs in FL primarily analyze updates from the target client, focusing on metrics such as loss, gradient norm, and gradient difference. However, these methods fail to leverage updates from non-target clients, potentially underutilizing available information. In this paper, we first formulate a one-tailed likelihood-ratio hypothesis test based on the likelihood of updates from non-target clients. Building upon this formulation, we introduce a three-step MIA method, called FedMIA, which follows the "all for one" principle: leveraging updates from all clients across multiple communication rounds to enhance MIA effectiveness. Both theoretical analysis and extensive experimental results demonstrate that FedMIA outperforms existing MIAs in both classification and generative tasks. Additionally, it can be integrated as an extension to existing methods and is robust against various defense strategies, Non-IID data, and different federated structures. Our code is available at https://github.com/Liar-Mask/FedMIA.
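To make the core idea concrete, here is a minimal sketch of a one-tailed test in the spirit described above. It is not the paper's exact statistic: the function names (`fedmia_score`, `aggregate_rounds`), the choice of a per-round scalar update metric, and the Gaussian model of non-target clients' statistics are all illustrative assumptions. The point is the shape of the test: fit a null distribution from non-target clients' update statistics, then score how far the target client's statistic falls in the "member" tail, pooling evidence across rounds.

```python
import math
import statistics


def fedmia_score(target_stat, nontarget_stats):
    """One-tailed test sketch (illustrative, not the paper's exact statistic).

    Model a per-round update metric (e.g. similarity between a candidate
    sample's gradient and a client's update) of the non-target clients as
    Gaussian, and score how far the target client's metric lies in the
    upper ("member") tail.
    """
    mu = statistics.mean(nontarget_stats)
    sigma = statistics.stdev(nontarget_stats)
    z = (target_stat - mu) / (sigma + 1e-12)  # standardized deviation
    # One-tailed p-value under H0: the target behaves like a non-member.
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p


def aggregate_rounds(per_round_pvalues, threshold=0.05):
    """Pool evidence across communication rounds ("all for one"): here we
    simply average per-round p-values and flag membership when the pooled
    evidence is strong. The pooling rule is an assumption for illustration."""
    avg_p = statistics.mean(per_round_pvalues)
    return avg_p < threshold
```

For example, if the target client's metric is far above what the non-target clients exhibit in each round, the per-round p-values are small and the pooled decision flags membership; aggregating over rounds is what distinguishes this from a single-round, target-only test.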