AI Summary
The absence of standardized evaluation benchmarks for collaborative simultaneous localization and mapping (C-SLAM) hinders fair comparison and reproducibility of distributed optimization algorithms. To address this, we propose and open-source COSMO-Bench -- the first benchmark specifically designed for evaluating distributed C-SLAM optimization methods. COSMO-Bench comprises 24 multi-robot collaborative scenarios, generated using a state-of-the-art C-SLAM front-end and real-world LiDAR point-cloud data, each accompanied by high-accuracy ground-truth trajectories and environment maps. All datasets undergo rigorous sensor calibration and validation to ensure fidelity and consistency. The benchmark supports fair cross-algorithm comparison and full reproducibility, and is permanently archived with a DOI. By establishing a unified, scalable, and reproducible evaluation framework, COSMO-Bench fills a critical gap in C-SLAM research and provides a foundational platform for algorithm design, performance analysis, and system-level validation.
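To illustrate the kind of evaluation the ground-truth trajectories enable, the sketch below computes the absolute trajectory error (ATE) of an estimated trajectory against ground truth after rigid alignment. This is a generic, hedged example, not COSMO-Bench's prescribed metric or tooling; the file names and the assumption that trajectories are available as N x 3 position arrays in matching order are hypothetical.

```python
import numpy as np

def align_rigid(est, gt):
    """Least-squares rigid alignment (rotation + translation, no scale) of
    estimated positions to ground truth (Umeyama/Horn closed form)."""
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_est).T @ (gt - mu_gt)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_gt - R @ mu_est
    return R, t

def absolute_trajectory_error(est_xyz, gt_xyz):
    """RMSE of translational error after rigid alignment."""
    R, t = align_rigid(est_xyz, gt_xyz)
    aligned = (R @ est_xyz.T).T + t
    return np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1)))

if __name__ == "__main__":
    # Hypothetical file names: one row "x y z" per pose, estimate and
    # ground truth in corresponding order.
    est = np.loadtxt("robot0_estimate.txt")
    gt = np.loadtxt("robot0_groundtruth.txt")
    print(f"ATE RMSE: {absolute_trajectory_error(est, gt):.3f} m")
```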
Abstract
Recent years have seen a focus on research into distributed optimization algorithms for multi-robot Collaborative Simultaneous Localization and Mapping (C-SLAM). Research in this domain, however, is made difficult by a lack of standard benchmark datasets. Such datasets have been used to great effect in the field of single-robot SLAM, and researchers focused on multi-robot problems would benefit greatly from dedicated benchmark datasets. To address this gap, we design and release the Collaborative Open-Source Multi-robot Optimization Benchmark (COSMO-Bench) -- a suite of 24 datasets derived from a state-of-the-art C-SLAM front-end and real-world LiDAR data. Data DOI: https://doi.org/10.1184/R1/29652158
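For readers who want to work with the released scenarios directly, the sketch below parses an SE(3) pose graph from a g2o-style text file, a common interchange format for SLAM optimization problems. Whether COSMO-Bench ships its datasets in this format is an assumption here; check the dataset documentation at the DOI above, and treat the file name as hypothetical.

```python
from collections import namedtuple

Pose = namedtuple("Pose", "id xyz quat")        # quat stored as (qx, qy, qz, qw)
Edge = namedtuple("Edge", "i j xyz quat info")  # info: 21 upper-triangular entries

def load_g2o(path):
    """Parse SE(3) vertices and relative-pose edges from a g2o text file."""
    poses, edges = [], []
    with open(path) as f:
        for line in f:
            tok = line.split()
            if not tok:
                continue
            if tok[0] == "VERTEX_SE3:QUAT":
                vid, vals = int(tok[1]), list(map(float, tok[2:9]))
                poses.append(Pose(vid, vals[:3], vals[3:]))
            elif tok[0] == "EDGE_SE3:QUAT":
                i, j = int(tok[1]), int(tok[2])
                vals = list(map(float, tok[3:]))
                edges.append(Edge(i, j, vals[:3], vals[3:7], vals[7:28]))
    return poses, edges

# Hypothetical usage on one downloaded scenario file:
# poses, edges = load_g2o("scenario_01.g2o")
# print(len(poses), "poses,", len(edges), "measurements")
```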