Behavioral Anomaly Detection in Distributed Systems via Federated Contrastive Learning

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address behavioral anomaly detection in privacy-sensitive distributed systems, this paper proposes a Federated Contrastive Learning framework for Anomaly Detection (FCL-AD). FCL-AD learns decentralized, privacy-preserving feature representations without sharing raw logs, metrics, or system call data: each client extracts multi-source behavioral features with a local encoder and jointly optimizes a contrastive loss, which sharpens fine-grained anomaly discrimination, and a classification loss, while global model updates are performed via federated aggregation. The key innovation lies in embedding contrastive learning into the federated architecture. Extensive experiments on real-world attack scenarios and dynamic data streams show that FCL-AD significantly outperforms state-of-the-art methods, achieving a +3.2% improvement in F1-score, a 28% reduction in response latency, and superior cross-node generalization. The framework thus provides an efficient, trustworthy solution for intelligent, privacy-aware security monitoring in distributed environments.

📝 Abstract
This paper addresses the increasingly prominent problem of anomaly detection in distributed systems and proposes a detection method based on federated contrastive learning, aiming to overcome the limitations of traditional centralized approaches in data privacy, node heterogeneity, and anomaly pattern recognition. The method combines the distributed collaborative modeling of federated learning with the feature-discrimination enhancement of contrastive learning: it builds embedding representations on local nodes and constructs positive and negative sample pairs to guide the model toward a more discriminative feature space, then optimizes a global model through a federated aggregation strategy without exposing raw data. Specifically, an encoder maps local behavioral data, including system logs, operational metrics, and system calls, into a high-dimensional representation space, and the model is trained with both a contrastive loss and a classification loss to improve detection of fine-grained anomaly patterns. The method is evaluated under multiple typical attack types and in a simulated real-time data stream scenario to examine its responsiveness. Experimental results show that it outperforms existing approaches across multiple performance metrics, demonstrating strong detection accuracy and adaptability against complex anomalies in distributed environments. Through careful design of key modules and optimization of the training mechanism, the method balances privacy preservation with detection performance, offering a feasible technical path toward intelligent security management in distributed systems.
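The abstract describes a joint objective of a contrastive loss over positive/negative embedding pairs plus a classification loss. A minimal NumPy sketch of that combination is below; the InfoNCE form of the contrastive term, the function names, and the λ weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE contrastive loss over N positive pairs (z1[i], z2[i]).
    All other in-batch samples act as negatives."""
    # L2-normalise embeddings so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal of the similarity matrix
    return -np.mean(np.diag(log_prob))

def cross_entropy_loss(logits, labels):
    """Softmax cross-entropy for the classification head."""
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(len(labels)), labels])

def joint_loss(z1, z2, cls_logits, labels, lam=0.5):
    """Weighted sum of the contrastive and classification terms."""
    return lam * info_nce_loss(z1, z2) + (1 - lam) * cross_entropy_loss(cls_logits, labels)
```

In this sketch, pulling an anchor toward its positive pair while pushing it away from in-batch negatives is what yields the "more discriminative feature space" the abstract refers to; the classification term anchors that space to the normal/anomalous decision boundary.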
Problem

Research questions and friction points this paper is trying to address.

Detect anomalies in distributed systems privately
Overcome data privacy and heterogeneity limitations
Enhance anomaly pattern recognition accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning for distributed anomaly detection
Contrastive learning enhances feature discrimination
Privacy-preserving global model optimization
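The privacy-preserving global optimization described above rests on federated aggregation: nodes exchange only model parameters, never raw logs or system calls. A minimal sketch of the standard FedAvg-style weighted average is shown here as an illustration; the paper's actual aggregation strategy may differ.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine per-client parameter lists into a
    global model, weighting each client by its local sample count.
    Only parameters cross the network; raw data never leaves a node."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            agg[i] += (n / total) * w
    return agg
```

In each round, every client would train its local encoder and classifier on the joint contrastive/classification objective, send the updated parameters to the server, and receive the aggregated global model back.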
Renzi Meng
Northeastern University
Computer Science
Heyi Wang
Illinois Institute of Technology, Chicago, USA
Yumeng Sun
Rochester Institute of Technology, Rochester, USA
Renhan Zhang
University of Michigan, Ann Arbor, USA
Lian Lian
University of Southern California, Los Angeles, USA