CU-Multi: A Dataset for Multi-Robot Collaborative Perception

📅 2025-09-23
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Multi-robot collaborative perception suffers from a critical bottleneck: the scarcity of realistic, standardized benchmark datasets. Existing Collaborative SLAM (C-SLAM) evaluations predominantly partition single-robot trajectories, a practice that fails to capture genuine team coordination and yields results that are difficult to compare across studies. To address this, the authors introduce CU-Multi, a multi-robot dataset explicitly designed for collaborative perception evaluation. Data were collected over multiple days at two large outdoor sites on the University of Colorado Boulder campus using RGB-D cameras, semantic LiDAR, RTK GPS, and refined ground-truth odometry. The dataset comprises four synchronized runs with aligned start times and controlled trajectory overlap, replicating the distinct perspectives of a robot team, and includes dense semantic annotations. By pairing overlap variation with dense semantics, CU-Multi closes a fundamental gap in reproducible, verifiable evaluation for multi-robot collaborative perception.

📝 Abstract
A central challenge for multi-robot systems is fusing independently gathered perception data into a unified representation. Despite progress in Collaborative SLAM (C-SLAM), benchmarking remains hindered by the scarcity of dedicated multi-robot datasets. Many evaluations instead partition single-robot trajectories, a practice that may only partially reflect true multi-robot operations and, more critically, lacks standardization, leading to results that are difficult to interpret or compare across studies. While several multi-robot datasets have recently been introduced, they mostly contain short trajectories with limited inter-robot overlap and sparse intra-robot loop closures. To overcome these limitations, we introduce CU-Multi, a dataset collected over multiple days at two large outdoor sites on the University of Colorado Boulder campus. CU-Multi comprises four synchronized runs with aligned start times and controlled trajectory overlap, replicating the distinct perspectives of a robot team. It includes RGB-D sensing, RTK GPS, semantic LiDAR, and refined ground-truth odometry. By combining overlap variation with dense semantic annotations, CU-Multi provides a strong foundation for reproducible evaluation in multi-robot collaborative perception tasks.
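
Trajectory overlap between runs is the dataset's key controlled variable, so it helps to see how overlap might be quantified. The sketch below is a minimal illustration, not code from the paper: it computes the fraction of one robot's ground-truth positions that pass within a radius of another robot's path. The TUM-style file layout, the file names, and the 5 m radius are all assumptions for the example.

```python
import numpy as np

def load_positions(path):
    # Assumed TUM-style layout: timestamp tx ty tz qx qy qz qw per line.
    data = np.loadtxt(path)
    return data[:, 1:4]  # keep only x, y, z

def trajectory_overlap(traj_a, traj_b, radius=5.0):
    # Fraction of positions in traj_a that come within `radius` meters
    # of any position in traj_b (brute-force pairwise distance check).
    dists = np.linalg.norm(traj_a[:, None, :] - traj_b[None, :, :], axis=2)
    return float((dists.min(axis=1) < radius).mean())

if __name__ == "__main__":
    # Hypothetical file names; CU-Multi's actual layout may differ.
    a = load_positions("robot1_gt_odom.txt")
    b = load_positions("robot2_gt_odom.txt")
    print(f"overlap(robot1 -> robot2): {trajectory_overlap(a, b):.2%}")
```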
Problem

Research questions and friction points this paper is trying to address.

Fusing independently gathered perception data into a unified representation (see the sketch after this list)
Benchmarking hindered by the scarcity of dedicated multi-robot datasets
Existing multi-robot datasets contain short trajectories with limited inter-robot overlap and sparse loop closures, and lack standardization
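
The first friction point above can be made concrete with a minimal sketch: given poses in a shared world frame, fusing per-robot semantic scans reduces to transforming each labeled point cloud and concatenating the results. The data layout here (Nx3 points, integer labels, 4x4 poses) is an assumption for illustration, not CU-Multi's actual API.

```python
import numpy as np

def transform_points(points_xyz, pose_4x4):
    # Apply a 4x4 world-from-robot pose to Nx3 points.
    homog = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (pose_4x4 @ homog.T).T[:, :3]

def fuse_semantic_clouds(scans):
    # scans: list of (points_Nx3, labels_N, pose_4x4) tuples, one per
    # robot, with poses expressed in a shared world frame. Returns the
    # fused (points, labels) in that frame.
    all_pts, all_lbls = [], []
    for pts, lbls, pose in scans:
        all_pts.append(transform_points(pts, pose))
        all_lbls.append(lbls)
    return np.vstack(all_pts), np.concatenate(all_lbls)

# Toy usage with random data standing in for real CU-Multi scans.
rng = np.random.default_rng(0)
scan_a = (rng.normal(size=(100, 3)), rng.integers(0, 20, 100), np.eye(4))
scan_b = (rng.normal(size=(100, 3)), rng.integers(0, 20, 100), np.eye(4))
pts, lbls = fuse_semantic_clouds([scan_a, scan_b])
print(pts.shape, lbls.shape)  # (200, 3) (200,)
```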
Innovation

Methods, ideas, or system contributions that make the work stand out.

Four synchronized runs collected over multiple days with aligned start times and controlled trajectory overlap (see the timestamp sketch after this list)
Combines RGB-D, RTK GPS, semantic LiDAR, and refined ground-truth odometry
Provides dense semantic annotations for reproducible evaluation
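
Because the four runs start at aligned times, cross-robot evaluation typically begins with nearest-timestamp association between sensor streams. The snippet below is a generic sketch of that step rather than tooling shipped with the dataset; the 50 ms tolerance is an assumed value.

```python
import numpy as np

def associate(stamps_a, stamps_b, max_dt=0.05):
    # For each timestamp in stamps_a (seconds), find the nearest
    # timestamp in the sorted array stamps_b; keep pairs within max_dt.
    idx = np.searchsorted(stamps_b, stamps_a)
    pairs = []
    for i, t in enumerate(stamps_a):
        # The nearest neighbor is either the insertion point or the
        # element just before it.
        cands = [j for j in (idx[i] - 1, idx[i]) if 0 <= j < len(stamps_b)]
        j = min(cands, key=lambda k: abs(stamps_b[k] - t))
        if abs(stamps_b[j] - t) <= max_dt:
            pairs.append((i, j))
    return pairs

# Toy usage: two 10 Hz streams offset by 20 ms.
a = np.arange(0.0, 2.0, 0.1)
b = np.arange(0.02, 2.0, 0.1)
print(len(associate(a, b)))  # all 20 stamps match within tolerance
```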