Privacy-Preserving Federated Learning Framework for Risk-Based Adaptive Authentication

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Non-IID data in decentralized Risk-Based Authentication (RBA) induces model bias, poor generalization, and privacy leakage. Method: We propose the first federated learning framework for RBA that jointly ensures fairness and mitigates cold-start issues. It introduces an IID vectorization technique based on similarity transformations, integrates clustering-assisted risk labeling with differential privacy, employs message authentication codes for model integrity, provides game-based security proofs formally verified in the random oracle model, and supports privacy-preserving aggregation and personalized risk modeling over multimodal behavioral features. Contribution/Results: This work establishes the first mathematically provable fairness guarantee for federated aggregation in RBA, significantly enhances model robustness and detection accuracy for high-risk users, and effectively resists model inversion and membership inference attacks under stringent privacy constraints.

📝 Abstract
Balancing robust security with strong privacy guarantees is critical for Risk-Based Adaptive Authentication (RBA), particularly in decentralized settings. Federated Learning (FL) offers a promising solution by enabling collaborative risk assessment without centralizing user data. However, existing FL approaches struggle with Non-Independent and Identically Distributed (Non-IID) user features, resulting in biased, unstable, and poorly generalized global models. This paper introduces FL-RBA2, a novel Federated Learning framework for Risk-Based Adaptive Authentication that addresses Non-IID challenges through a mathematically grounded similarity transformation. By converting heterogeneous user features (including behavioral, biometric, contextual, interaction-based, and knowledge-based modalities) into IID similarity vectors, FL-RBA2 supports unbiased aggregation and personalized risk modeling across distributed clients. The framework mitigates cold-start limitations via clustering-based risk labeling, incorporates Differential Privacy (DP) to safeguard sensitive information, and employs Message Authentication Codes (MACs) to ensure model integrity and authenticity. Federated updates are securely aggregated into a global model, achieving strong balance between user privacy, scalability, and adaptive authentication robustness. Rigorous game-based security proofs in the Random Oracle Model formally establish privacy, correctness, and adaptive security guarantees. Extensive experiments on keystroke, mouse, and contextual datasets validate FL-RBA2's effectiveness in high-risk user detection and its resilience to model inversion and inference attacks, even under strong DP constraints.
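The abstract's core idea is mapping heterogeneous per-user features into similarity vectors so that client data looks closer to IID before aggregation. The paper's exact transformation is not given here; the following is a minimal sketch of one plausible instantiation, assuming cosine similarity against a set of shared reference profiles (the names `similarity_vectorize` and `references` are illustrative, not from the paper):

```python
import numpy as np

def similarity_vectorize(features, references):
    """Map a raw user feature vector to a vector of cosine similarities
    against shared reference profiles. Hypothetical sketch of the paper's
    similarity transformation, not the authors' actual method."""
    feats = features / (np.linalg.norm(features) + 1e-12)
    refs = references / (np.linalg.norm(references, axis=1, keepdims=True) + 1e-12)
    return refs @ feats  # one similarity score per reference profile

rng = np.random.default_rng(0)
references = rng.normal(size=(8, 16))   # shared anchor profiles (assumed)
user_features = rng.normal(size=16)     # heterogeneous per-user features
sim_vec = similarity_vectorize(user_features, references)
print(sim_vec.shape)  # (8,)
```

Because every client reports similarities against the same anchors, the transformed vectors share a common scale and support, which is what makes unbiased aggregation plausible.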
Problem

Research questions and friction points this paper is trying to address.

Addresses Non-IID data challenges in federated learning for authentication
Balances security and privacy in risk-based adaptive authentication systems
Prevents biased and unstable global models through similarity transformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Similarity transformation for Non-IID data
Differential privacy with secure aggregation
Clustering-based risk labeling for cold-start
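The abstract pairs differential privacy on client updates with MACs for update integrity. A minimal sketch of that combination, assuming the standard Gaussian mechanism (clip, then add noise) and an HMAC-SHA256 tag over the serialized update; the key, clipping norm, and noise scale below are illustrative placeholders, not values from the paper:

```python
import hmac, hashlib
import numpy as np

def dp_protect_update(update, clip_norm=1.0, noise_sigma=0.5, rng=None):
    """Clip a client's model update to bounded L2 norm, then add Gaussian
    noise (Gaussian-mechanism sketch; parameters are illustrative)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_sigma * clip_norm, size=update.shape)

def tag_update(update, key):
    """HMAC-SHA256 tag over the serialized update for integrity/authenticity."""
    return hmac.new(key, update.tobytes(), hashlib.sha256).hexdigest()

key = b"shared-client-key"              # hypothetical pre-shared MAC key
update = np.ones(4, dtype=np.float64)   # toy model update
noisy = dp_protect_update(update, rng=np.random.default_rng(1))
tag = tag_update(noisy, key)
# the server verifies the tag before including the update in aggregation
assert hmac.compare_digest(tag, tag_update(noisy, key))
```

Noise is added before tagging, so the MAC authenticates exactly the bytes the server aggregates; a tampered update fails `compare_digest` and is dropped.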
Yaser Baseri
University of Montreal
Cybersecurity, Cryptography, Risk Assessment, Machine Learning, Quantum Computing
Abdelhakim Senhaji Hafid
Department of Computer Science and Operations Research, University of Montreal, Canada
Dimitrios Makrakis
School of Electrical Engineering and Computer Science, University of Ottawa
Nanonetworks, Biocommunications, Cyber-security, Wireless Networks, Optical Networks
Hamidreza Fereidouni
Department of Computer Science and Operations Research, University of Montreal, Canada