COSMO-Bench: A Benchmark for Collaborative SLAM Optimization

πŸ“… 2025-08-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
The absence of standardized evaluation benchmarks for collaborative simultaneous localization and mapping (C-SLAM) hinders fair comparison and reproducibility of distributed optimization algorithms. To address this, we propose and open-source COSMO-Bench, the first benchmark specifically designed for evaluating distributed C-SLAM optimization methods. COSMO-Bench comprises 24 multi-robot collaborative scenarios, generated with a state-of-the-art C-SLAM front-end and real-world LiDAR point cloud data, each accompanied by high-accuracy ground-truth trajectories and environment maps. All datasets undergo rigorous sensor calibration and validation to ensure fidelity and consistency. The benchmark supports fair cross-algorithm comparison and full reproducibility, and is permanently archived with a DOI. By establishing a unified, scalable, and reproducible evaluation framework, COSMO-Bench fills a critical gap in C-SLAM research and provides a foundational platform for algorithm design, performance analysis, and system-level validation.
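Since each COSMO-Bench scenario ships with high-accuracy ground-truth trajectories, a typical evaluation step for any optimization method is computing the Absolute Trajectory Error (ATE) between an estimated trajectory and the ground truth. The minimal sketch below is illustrative only (it is not part of the benchmark's tooling; the function name and the choice of Kabsch/Umeyama alignment are assumptions), showing how such a metric is commonly computed:

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Illustrative ATE (RMSE) after rigid Kabsch/Umeyama alignment.

    est_xyz, gt_xyz: (N, 3) arrays of time-corresponding positions.
    Alignment removes the arbitrary global frame before measuring error.
    """
    est = np.asarray(est_xyz, dtype=float)
    gt = np.asarray(gt_xyz, dtype=float)
    # Center both trajectories (removes the translation offset).
    E = est - est.mean(axis=0)
    G = gt - gt.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance (Kabsch).
    U, _, Vt = np.linalg.svd(E.T @ G)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T  # rotates the estimate frame onto the ground truth
    err = (E @ R.T) - G
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```

The rigid alignment is the standard convention for SLAM trajectory metrics: a pose-graph solution is only defined up to a global rotation and translation, so errors are measured after removing that gauge freedom.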

πŸ“ Abstract
Recent years have seen growing research interest in distributed optimization algorithms for multi-robot Collaborative Simultaneous Localization and Mapping (C-SLAM). Research in this domain, however, is hampered by a lack of standard benchmark datasets. Such datasets have been used to great effect in single-robot SLAM, and researchers working on multi-robot problems would benefit greatly from dedicated benchmarks of their own. To address this gap, we design and release the Collaborative Open-Source Multi-robot Optimization Benchmark (COSMO-Bench): a suite of 24 datasets derived from a state-of-the-art C-SLAM front-end and real-world LiDAR data. Data DOI: https://doi.org/10.1184/R1/29652158
Problem

Research questions and friction points this paper is trying to address.

Lack of standard benchmark datasets for multi-robot SLAM
Difficulty in evaluating distributed optimization algorithms
Need for dedicated collaborative SLAM evaluation resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Designed and released COSMO-Bench, an open-source multi-robot optimization benchmark for C-SLAM
Generated 24 datasets from a state-of-the-art C-SLAM front-end and real-world LiDAR data
Paired each scenario with high-accuracy ground-truth trajectories and maps
πŸ”Ž Similar Papers
No similar papers found.