United We Defend: Collaborative Membership Inference Defenses in Federated Learning

πŸ“… 2026-01-11
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the vulnerability of existing standalone defense mechanisms in federated learning against trajectory-based membership inference attacks (MIAs), particularly in heterogeneous settings where clients exhibit diverse privacy and utility requirements. To this end, we propose CoFedMID, a collaborative defense framework that introduces, for the first time, a client-cooperative defense paradigm. CoFedMID establishes a defense coalition through three core techniques: class-guided sample partitioning, utility-aware compensation, and aggregation-neutral perturbation. This approach jointly mitigates the model’s memorization of training samples while harmonizing privacy preservation with model utility. Extensive experiments demonstrate that CoFedMID significantly reduces the success rates of seven representative MIA variants across three benchmark datasets, incurs only marginal utility degradation, and maintains robust performance under diverse system configurations.

πŸ“ Abstract
Membership inference attacks (MIAs), which determine whether a specific data point was included in the training set of a target model, pose severe threats in federated learning (FL). Unfortunately, existing MIA defenses, typically applied independently to each client in FL, are ineffective against powerful trajectory-based MIAs that exploit temporal information throughout the training process to infer membership status. In this paper, we investigate a new FL defense scenario driven by heterogeneous privacy needs and privacy-utility trade-offs, where only a subset of clients are defended, as well as a collaborative defense mode in which clients cooperate to mitigate membership privacy leakage. To this end, we introduce CoFedMID, a collaborative defense framework against MIAs in FL, which limits local model memorization of training samples and, through a defender coalition, enhances both privacy protection and model utility. Specifically, CoFedMID consists of three modules: a class-guided partition module for selective use of local training samples, a utility-aware compensation module that recycles contributive samples while preventing overconfidence on them, and an aggregation-neutral perturbation module that injects noise into client updates which cancels out at the coalition level. Extensive experiments on three datasets show that our defense framework significantly reduces the performance of seven MIAs while incurring only a small utility loss. These results are consistently verified across various defense settings.
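The aggregation-neutral perturbation idea from the abstract — noise injected into individual client updates that cancels out when the coalition's updates are aggregated — can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, parameters, and the zero-sum Gaussian construction below are illustrative assumptions about how such cancellation could work.

```python
import numpy as np

def coalition_perturbations(num_clients, shape, scale=0.1, seed=0):
    """Hypothetical sketch of aggregation-neutral perturbation:
    generate one noise vector per coalition member such that the
    vectors sum to zero. Each member masks its update with its
    noise vector, so individual uploads are perturbed while the
    coalition-level aggregate is unchanged."""
    rng = np.random.default_rng(seed)
    noises = [rng.normal(0.0, scale, shape) for _ in range(num_clients - 1)]
    noises.append(-np.sum(noises, axis=0))  # last vector cancels the rest
    return noises

# Toy usage: mask each client's local update before upload.
updates = [np.ones(4) * i for i in range(3)]   # placeholder local updates
noises = coalition_perturbations(3, (4,))
masked = [u + n for u, n in zip(updates, noises)]

# The aggregate of masked updates equals the aggregate of true updates.
assert np.allclose(sum(masked), sum(updates))
```

In a real deployment the cancellation would need to be coordinated without a trusted party (e.g. via pairwise shared seeds, as in secure aggregation schemes); the sketch only shows the algebraic property that makes the perturbation "aggregation-neutral."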
Problem

Research questions and friction points this paper is trying to address.

membership inference attacks, federated learning, privacy leakage, collaborative defense, trajectory-based attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

collaborative defense, membership inference attacks, federated learning, defender coalition, aggregation-neutral perturbation
Li Bai
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
Junxu Liu
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University; PolyU Research Centre for Privacy and Security Technologies in Future Smart Systems
Sen Zhang
Hong Kong Polytechnic University
Graph data management; data privacy protection
Xinwei Zhang
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
Qingqing Ye
Assistant Professor, The Hong Kong Polytechnic University
Data privacy and security; adversarial machine learning
Haibo Hu
Professor, Hong Kong Polytechnic University
Data privacy and security; adversarial machine learning; mobile and spatiotemporal databases