Random Client Selection on Contrastive Federated Learning for Tabular Data

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address privacy leakage from gradient inversion attacks in contrastive federated learning (CFL), this paper proposes a lightweight defense mechanism based on random client selection, specifically designed for collaborative tabular data modeling in vertical federated learning. Unlike fixed-topology CFL, our approach systematically demonstrates—both theoretically and empirically—that random client sampling significantly enhances robustness against gradient leakage. Experiments across multiple benchmark tabular datasets show that the method reduces attack success rates by over 60%, without compromising model convergence or predictive accuracy. The key contribution lies in revealing and validating client randomization as an effective implicit regularizer for privacy protection in CFL, establishing a novel paradigm for secure federated learning that requires no additional noise injection or communication overhead.
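The selection mechanism described above can be sketched as follows. This is a minimal illustration only: the client interface (`local_update`), the sampling fraction, and the plain weight averaging are assumptions for the sketch, not the paper's exact CFL aggregation.

```python
import random

def federated_round(clients, global_model, sample_frac=0.3, seed=None):
    """One training round with random client selection (hypothetical sketch).

    Each client is assumed to expose local_update(model) returning a dict of
    weights; simple averaging stands in for the paper's CFL aggregation.
    """
    rng = random.Random(seed)
    k = max(1, int(len(clients) * sample_frac))
    # Randomly sampled participants: an attacker observing gradients cannot
    # rely on a fixed topology to link updates to specific parties.
    selected = rng.sample(clients, k)
    updates = [c.local_update(global_model) for c in selected]
    # Average the sampled updates key by key.
    return {key: sum(u[key] for u in updates) / len(updates)
            for key in updates[0]}
```

Because participation changes every round, the mapping from observed gradients to data holders shifts as well, which is the "implicit regularizer" effect the summary attributes to randomization.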

📝 Abstract
Vertical Federated Learning (VFL) has revolutionised collaborative machine learning by enabling privacy-preserving model training across multiple parties. However, it remains vulnerable to information leakage during intermediate computation sharing. While Contrastive Federated Learning (CFL) was introduced to mitigate these privacy concerns through representation learning, it still faces challenges from gradient-based attacks. This paper presents a comprehensive experimental analysis of gradient-based attacks in CFL environments and evaluates random client selection as a defensive strategy. Through extensive experimentation, we demonstrate that random client selection proves particularly effective in defending against gradient attacks in the CFL network. Our findings provide valuable insights for implementing robust security measures in contrastive federated learning systems, contributing to the development of more secure collaborative learning frameworks.
Problem

Research questions and friction points this paper is trying to address.

Analyzing gradient-based attacks in Contrastive Federated Learning (CFL)
Evaluating random client selection as defense against gradient attacks
Enhancing privacy and security in collaborative tabular data learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random client selection defends against gradient attacks
Contrastive Federated Learning enhances privacy
Experimental analysis validates the proposed security measures
Achmad Ginanjar
School of Electrical Engineering and Computer Science, The University of Queensland, Queensland, Australia
Xue Li
School of Electrical Engineering and Computer Science, The University of Queensland, Queensland, Australia
Priyanka Singh
Lecturer of Cyber Security, University of Queensland
Cyber Security, Multimedia Forensics, Encrypted Domain Processing
Wen Hua
The Hong Kong Polytechnic University
Database, Information Systems, Data Mining, Deep Learning