🤖 AI Summary
Visual SLAM research lacks a systematic framework for evaluating robustness under adversarial environmental conditions such as fog and rain. This work proposes a modular, extensible evaluation framework that decouples datasets, perturbation models, and SLAM algorithms; it gives precise control over perturbation intensity in real-world units (such as fog visibility in meters) and supports configurable perturbations covering weather effects, camera artifacts, and video transmission degradations. The framework also includes an automated failure-point search and is used to evaluate seven state-of-the-art SLAM algorithms across three benchmark datasets, identifying their performance boundaries and failure thresholds under diverse adverse conditions.
📝 Abstract
We present SAL (SLAM Adversarial Lab), a modular framework for evaluating visual SLAM systems under adversarial conditions such as fog and rain. SAL represents each adversarial condition as a perturbation that transforms an existing dataset into an adversarial dataset. When transforming a dataset, SAL supports severity levels expressed in easily interpretable real-world units, such as meters of fog visibility. SAL's extensible architecture decouples datasets, perturbations, and SLAM algorithms through common interfaces, so users can add new components without rewriting integration code. Moreover, SAL includes a search procedure that finds the severity level of a perturbation at which a SLAM system fails. To showcase the capabilities of SAL, our evaluation integrates seven SLAM algorithms and evaluates them across three datasets under weather, camera, and video transport perturbations.
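To make the "perturbation with real-world severity units" idea concrete, here is a minimal sketch of what a fog perturbation could look like. The class name, method signature, and the uniform-depth simplification are all assumptions for illustration, not SAL's actual API; the fog itself follows the standard Koschmieder atmospheric scattering model, in which the extinction coefficient is derived from meteorological visibility in meters.

```python
import numpy as np

# Hypothetical sketch of a SAL-style perturbation (names and interface are
# assumptions, not SAL's actual code). Severity is given in a real-world
# unit: fog visibility in meters.

class FogPerturbation:
    """Uniform fog via the Koschmieder model: I' = I*t + A*(1 - t), where
    transmission t = exp(-beta * depth) and beta = 3.912 / visibility
    (visibility defined at the conventional 2% contrast threshold)."""

    def __init__(self, visibility_m: float, airlight: float = 0.9,
                 assumed_depth_m: float = 30.0):
        self.beta = 3.912 / visibility_m   # extinction coefficient [1/m]
        self.airlight = airlight           # atmospheric light, intensities in [0, 1]
        self.depth = assumed_depth_m       # flat scene depth when no depth map exists

    def apply(self, frame: np.ndarray) -> np.ndarray:
        t = np.exp(-self.beta * self.depth)           # scalar transmission
        return frame * t + self.airlight * (1.0 - t)  # blend toward airlight

# Lower visibility (denser fog) pulls the image harder toward the airlight.
frame = np.full((4, 4), 0.2)                          # toy gray frame
light_fog = FogPerturbation(visibility_m=500.0).apply(frame)
heavy_fog = FogPerturbation(visibility_m=20.0).apply(frame)
```

Expressing severity as visibility in meters, rather than an abstract 1-to-5 scale, is what lets failure thresholds be reported in physically meaningful terms (e.g. "tracking is lost below 40 m visibility").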
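The failure-point search can be illustrated with a small sketch as well. Assuming each SLAM run at a given severity yields a pass/fail outcome (e.g. trajectory error under a tolerance, tracking not lost) and that failure is monotone in severity, bisection over the severity range locates the threshold; the function and predicate names here are hypothetical, not SAL's.

```python
# Hypothetical sketch of an automated failure-point search (not SAL's actual
# code). `passes_at(severity)` would run the SLAM system on a dataset
# perturbed at that severity and check a success criterion.

def find_failure_point(passes_at, lo, hi, tol=1.0):
    """Return the largest severity in [lo, hi] (to within tol) at which
    `passes_at` still succeeds, assuming failures are monotone in severity.
    Returns None if the system already fails at `lo`."""
    if not passes_at(lo):
        return None          # fails even at the mildest severity
    if passes_at(hi):
        return hi            # never fails within the tested range
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if passes_at(mid):
            lo = mid         # still succeeds: threshold lies above mid
        else:
            hi = mid         # fails: threshold lies below mid
    return lo

# Toy stand-in for "run SLAM and evaluate": pretend tracking fails once the
# perturbation severity exceeds 37.5 (in whatever unit the perturbation uses).
threshold = find_failure_point(lambda s: s <= 37.5, lo=0.0, hi=100.0, tol=0.5)
```

Each `passes_at` probe is a full SLAM run, so bisection's logarithmic number of probes matters far more here than in a typical numeric search.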