GrandTour: A Legged Robotics Dataset in the Wild for Multi-Modal Perception and State Estimation

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of large-scale, publicly available multimodal datasets for legged robots operating in complex real-world environments. To bridge this gap, we present and open-source the largest open-access legged-robotics dataset to date, spanning diverse indoor and outdoor settings such as alpine regions, forests, ruins, and urban areas. The dataset provides high-precision ground-truth trajectories, fused from RTK-GNSS and total-station measurements, alongside time-synchronized multimodal sensor streams, including spinning LiDAR, multiple RGB cameras, a stereo depth camera, and proprioceptive data. Collected with an ANYmal-D quadrupedal robot carrying the Boxi multimodal sensor suite, the dataset is publicly released on Hugging Face and in ROS formats, establishing a robust benchmark for state estimation, SLAM, and multimodal perception algorithms.
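
Since the dataset is distributed through Hugging Face, a natural first step is pulling a subset of files with the `huggingface_hub` client. The sketch below is illustrative only: the repository id and file patterns are assumptions, not confirmed names from the release; consult https://grand-tour.leggedrobotics.com for the actual layout.

```python
# A minimal sketch of downloading part of the dataset from Hugging Face.
# The repo id "leggedrobotics/grand_tour_dataset" and the file patterns are
# hypothetical placeholders -- check the project page for the real layout.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="leggedrobotics/grand_tour_dataset",   # hypothetical repo id
    repo_type="dataset",
    allow_patterns=["*.yaml", "missions/*"],       # hypothetical file layout
)
print(f"Dataset files downloaded to: {local_dir}")
```

Filtering with `allow_patterns` avoids fetching the full archive, which matters for a dataset of this scale.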

📝 Abstract
Accurate state estimation and multi-modal perception are prerequisites for autonomous legged robots in complex, large-scale environments. To date, no large-scale public legged-robot dataset captures the real-world conditions needed to develop and benchmark algorithms for legged-robot state estimation, perception, and navigation. To address this, we introduce the GrandTour dataset, a multi-modal legged-robotics dataset collected across challenging outdoor and indoor environments, featuring an ANYbotics ANYmal-D quadruped equipped with the Boxi multi-modal sensor payload. GrandTour spans a broad range of environments and operational scenarios across distinct test sites, ranging from alpine scenery and forests to demolished buildings and urban areas, and covers a wide variation in scale, complexity, illumination, and weather conditions. The dataset provides time-synchronized sensor data from spinning LiDARs, multiple RGB cameras with complementary characteristics, proprioceptive sensors, and stereo depth cameras. Moreover, it includes high-precision ground-truth trajectories from satellite-based RTK-GNSS and a Leica Geosystems total station. This dataset supports research in SLAM, high-precision state estimation, and multi-modal learning, enabling rigorous evaluation and development of new approaches to sensor fusion in legged robotic systems. With its extensive scope, GrandTour represents the largest open-access legged-robotics dataset to date. The dataset is available at https://grand-tour.leggedrobotics.com, on HuggingFace (ROS-independent), and in ROS formats, along with tools and demo resources.
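
Because the ground-truth trajectories are intended for benchmarking state estimation and SLAM, a typical use is computing the absolute trajectory error (ATE) of an estimator against the fused RTK-GNSS / total-station track. The sketch below implements the standard Umeyama alignment plus RMSE recipe on already time-associated Nx3 position arrays; it is generic benchmarking code under those assumptions, not tooling from the GrandTour release.

```python
# A minimal ATE sketch: rigidly align an estimated trajectory to ground truth,
# then report the RMSE of the position residuals. Both inputs are assumed to
# be time-associated Nx3 arrays of positions in meters.
import numpy as np

def umeyama_align(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Least-squares rigid alignment (rotation + translation) of est onto gt."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    cov = (gt - mu_g).T @ (est - mu_e) / len(est)
    U, _, Vt = np.linalg.svd(cov)
    # Guard against reflections so the result is a proper rotation.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    return est @ R.T + t

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """Absolute trajectory error: RMSE of position residuals after alignment."""
    aligned = umeyama_align(est, gt)
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Synthetic example: a shifted, noisy copy of a random-walk ground-truth track.
gt = np.cumsum(np.random.randn(500, 3) * 0.1, axis=0)
est = gt + np.array([2.0, -1.0, 0.5]) + np.random.randn(500, 3) * 0.02
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```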
Problem

Research questions and friction points this paper addresses.

legged robotics
state estimation
multi-modal perception
dataset
autonomous navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

legged robotics
multi-modal perception
state estimation
large-scale dataset
sensor fusion