🤖 AI Summary
Off-road autonomous driving is hindered by the scarcity of large-scale, high-quality datasets and standardized benchmarks. To address this, we introduce ORAD-3D, the first large-scale, multimodal 3D dataset explicitly designed for unstructured terrain, encompassing diverse off-road environments, challenging weather conditions, and varying illumination, with centimeter-level GPS/IMU-calibrated ground-truth ego-poses. We establish a comprehensive benchmark comprising five core tasks: 2D free-space detection, 3D occupancy prediction, rough GPS-guided path planning, vision-language model-driven autonomous driving, and world modeling for off-road environments. Our benchmark integrates multi-sensor fusion data acquisition, vision-language models, and world-model-based planning architectures. All components, including the dataset, benchmark platform, and implementation code, will be fully open-sourced. ORAD-3D advances the training, evaluation, and development of perception, scene understanding, and decision-making models for off-road autonomy.
📝 Abstract
A major bottleneck in off-road autonomous driving research lies in the scarcity of large-scale, high-quality datasets and benchmarks. To bridge this gap, we present ORAD-3D, which, to the best of our knowledge, is the largest dataset specifically curated for off-road autonomous driving. ORAD-3D covers a wide spectrum of terrains, including woodlands, farmlands, grasslands, riversides, gravel roads, cement roads, and rural areas, while capturing diverse environmental variations across weather conditions (sunny, rainy, foggy, and snowy) and illumination levels (bright daylight, daytime, twilight, and nighttime). Building upon this dataset, we establish a comprehensive suite of benchmark evaluations spanning five fundamental tasks: 2D free-space detection, 3D occupancy prediction, rough GPS-guided path planning, vision-language model-driven autonomous driving, and world modeling for off-road environments. Together, the dataset and benchmarks provide a unified and robust resource for advancing perception and planning in challenging off-road scenarios. The dataset and code will be made publicly available at https://github.com/chaytonmin/ORAD-3D.
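Free-space detection benchmarks are commonly scored with pixel-wise intersection-over-union (IoU) between a predicted drivable-area mask and the ground truth. The abstract does not specify ORAD-3D's exact metric, so the function below is only an illustrative sketch assuming binary free-space masks (1 = drivable, 0 = non-drivable):

```python
import numpy as np

def freespace_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pixel-wise IoU between two binary free-space masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks agree perfectly.
    return float(intersection) / float(union) if union else 1.0

# Toy 4x4 example: prediction covers the bottom 2 rows, ground truth the bottom 3.
pred = np.zeros((4, 4)); pred[2:, :] = 1
gt = np.zeros((4, 4)); gt[1:, :] = 1
print(freespace_iou(pred, gt))  # 8 overlapping pixels / 12 union pixels = 0.666...
```

In practice, per-image IoU values would be averaged over a test split, often broken down by terrain, weather, or illumination condition to expose failure modes.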