Discovering Antagonists in Networks of Systems: Robot Deployment

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-time detection of adversarial agents in robotic swarm coverage tasks remains challenging due to the lack of prior knowledge about attack patterns. Method: This paper proposes a context-aware physical motion anomaly detection framework that relies solely on simulation of normal behavior. It employs a normalizing-flow-based unsupervised model to capture spatiotemporal motion patterns, leveraging contextual information for robust anomaly scoring—without requiring adversarial samples or labeled attack data. Contribution/Results: Evaluated under five representative adversarial strategies, the method achieves significantly higher average detection accuracy than state-of-the-art approaches. Empirical results show ≥80% recall per antagonistic class and a <5% false positive rate, with strong consistency between software simulation and hardware deployment. Crucially, this work presents the first experimental validation of such an anomaly detection scheme on real robotic hardware, demonstrating both efficacy and practical deployability in resource-constrained swarm systems.

📝 Abstract
A contextual anomaly detection method is proposed and applied to the physical motions of a robot swarm executing a coverage task. Using simulations of a swarm's normal behavior, a normalizing flow is trained to predict the likelihood of a robot motion within the current context of its environment. During application, the predicted likelihood of the observed motions is used by a detection criterion that categorizes a robot agent as normal or antagonistic. The proposed method is evaluated on five different strategies of antagonistic behavior. Importantly, only readily available simulated data of normal robot behavior is used for training such that the nature of the anomalies need not be known beforehand. The best detection criterion correctly categorizes at least 80% of each antagonistic type while maintaining a false positive rate of less than 5% for normal robot agents. Additionally, the method is validated in hardware experiments, yielding results similar to the simulated scenarios. Compared to the state-of-the-art approach, both the predictive performance of the normalizing flow and the robustness of the detection criterion are increased.
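The detection pipeline described in the abstract can be sketched as follows: fit a density model to simulated normal motions, then flag any robot whose observed motions receive a low predicted likelihood. This is a minimal illustrative sketch, not the paper's implementation: the toy Gaussian density below stands in for the trained conditional normalizing flow, and the class names, threshold value, and `classify_agents` helper are hypothetical.

```python
import numpy as np

class ToyDensityModel:
    """Hypothetical stand-in for the trained normalizing flow: a diagonal
    Gaussian fitted to simulated normal motions (the paper conditions the
    flow on the robot's environmental context)."""
    def fit(self, motions):
        self.mu = motions.mean(axis=0)
        self.sigma = motions.std(axis=0) + 1e-8
        return self

    def log_prob(self, motions):
        # Per-sample log-likelihood under the fitted Gaussian.
        z = (motions - self.mu) / self.sigma
        return -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=1) - np.log(self.sigma).sum()

def classify_agents(model, observed_per_robot, threshold):
    """Flag a robot as antagonistic when the mean log-likelihood of its
    observed motions falls below a threshold (a simple stand-in for the
    paper's detection criterion)."""
    scores = np.array([model.log_prob(m).mean() for m in observed_per_robot])
    return scores < threshold

rng = np.random.default_rng(0)
# Simulated normal motion data (2-D motion features), used for training only.
normal_motions = rng.normal(0.0, 1.0, size=(1000, 2))
model = ToyDensityModel().fit(normal_motions)

# One normally behaving robot and one antagonist with out-of-distribution motions.
normal_robot = rng.normal(0.0, 1.0, size=(50, 2))
antagonist = rng.normal(4.0, 1.0, size=(50, 2))
flags = classify_agents(model, [normal_robot, antagonist], threshold=-4.0)
print(flags)  # only the second (antagonistic) robot should be flagged
```

Because training uses only simulated normal behavior, no assumption about the antagonists' strategy enters the model; only the threshold of the detection criterion must be chosen.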
Problem

Research questions and friction points this paper is trying to address.

Detecting antagonistic robots in swarms
Using a normalizing flow for anomaly detection
Evaluating on five strategies of antagonistic behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contextual anomaly detection method
Normalizing flow for motion prediction
Detection criterion for antagonistic behavior
Ingeborg Wenger
Institute of Engineering and Computational Mechanics, University of Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany
Peter Eberhard
Professor, University of Stuttgart, Germany
Henrik Ebel
LUT University
Multibody System Dynamics, Applied Mechanics, Control Engineering, Optimization, Robotics