🤖 AI Summary
Existing robotic datasets are predominantly confined to structured urban environments, limiting their utility for localization and depth perception in unstructured natural scenes. To address this gap, this work proposes WildCross, the first large-scale, cross-modal benchmark tailored for natural environments, comprising over 476K temporally and spatially synchronized frames of RGB images, semi-dense depth maps, surface normals, 6DoF poses, and dense LiDAR submaps. WildCross fills a critical void in aligned multimodal data for wild settings and supports vision-only, LiDAR-only, and cross-modal joint tasks. Experiments demonstrate that WildCross effectively advances research in place recognition and metric depth estimation, establishing a challenging new benchmark for robust 3D perception in natural environments.
📝 Abstract
Recent years have seen a significant increase in demand for robotic solutions in unstructured natural environments, alongside growing interest in bridging 2D and 3D scene understanding. However, existing robotics datasets are predominantly captured in structured urban environments, making them inadequate for addressing the challenges posed by complex, unstructured natural settings. To address this gap, we propose WildCross, a cross-modal benchmark for place recognition and metric depth estimation in large-scale natural environments. WildCross comprises over 476K sequential RGB frames with semi-dense depth and surface normal annotations, each aligned with accurate 6DoF poses and synchronized dense LiDAR submaps. We conduct comprehensive experiments on visual, LiDAR, and cross-modal place recognition, as well as metric depth estimation, demonstrating the value of WildCross as a challenging benchmark for multi-modal robotic perception tasks. We provide access to the code repository and dataset at https://csiro-robotics.github.io/WildCross.
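To make the modality bundle concrete, the following is a minimal sketch of how one aligned sample (RGB, semi-dense depth, surface normals, 6DoF pose, LiDAR submap) might be represented. All names, shapes, and conventions here are illustrative assumptions, not the dataset's actual schema or API:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class WildCrossFrame:
    """Hypothetical per-frame record; field names and shapes are
    illustrative assumptions, not the dataset's actual format."""
    rgb: np.ndarray      # (H, W, 3) uint8 image
    depth: np.ndarray    # (H, W) float32, semi-dense (NaN where no depth)
    normals: np.ndarray  # (H, W, 3) float32 unit surface normals
    pose: np.ndarray     # (4, 4) float64 6DoF world-from-camera transform
    submap: np.ndarray   # (N, 3) float32 dense LiDAR submap points

def valid_depth_ratio(frame: WildCrossFrame) -> float:
    """Fraction of pixels carrying a valid (finite) depth value."""
    return float(np.isfinite(frame.depth).mean())

# Toy example with synthetic data: half the pixels have depth.
h, w = 4, 4
depth = np.full((h, w), np.nan, dtype=np.float32)
depth[:2, :] = 5.0
frame = WildCrossFrame(
    rgb=np.zeros((h, w, 3), np.uint8),
    depth=depth,
    normals=np.zeros((h, w, 3), np.float32),
    pose=np.eye(4),
    submap=np.zeros((10, 3), np.float32),
)
print(valid_depth_ratio(frame))  # 0.5
```

A semi-dense depth map means only a subset of pixels carry metric depth, so downstream code typically needs a validity mask like the one computed above.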