🤖 AI Summary
Lunar high-latitude regions present extreme visual challenges: low solar elevation angles produce high dynamic range, elongated shadows, and near-total darkness, severely degrading conventional vision-based perception.
Method: This work introduces the first multimodal robotic perception dataset specifically designed for complex lunar illumination conditions. It integrates a single-photon avalanche diode (SPAD) camera for high-sensitivity, high-speed imaging under ultra-low-light conditions, synchronized with stereo RGB cameras, a monocular monochrome camera, an inertial measurement unit (IMU), and wheel encoders. Data were collected across diverse trajectories and illumination regimes, from dawn to night, under controlled rover motion.
Contribution/Results: The dataset comprises 88 sequences and 1.3 million precisely timestamp-aligned images, captured with the rover static and moving at slow (5 cm/s) and fast (50 cm/s) speeds. It is the first publicly available benchmark addressing high-latitude lunar surface perception under degraded visual conditions, enabling rigorous evaluation of autonomous navigation and scientific imaging algorithms in extreme illumination scenarios.
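All sensor streams share a common timestamp base. As a rough illustration of how a downstream user might exploit that, the sketch below associates each image with its nearest IMU sample in time; the file names, column layout, and tolerance are assumptions made for illustration, not the dataset's documented format.

```python
import numpy as np

def associate_by_timestamp(image_stamps, imu_stamps, max_dt=0.01):
    """For each image timestamp, find the index of the nearest IMU sample.

    Both inputs are sorted 1-D arrays of timestamps in seconds.
    Returns an array of IMU indices, with -1 where no sample lies within max_dt.
    """
    idx = np.searchsorted(imu_stamps, image_stamps)      # insertion points
    idx = np.clip(idx, 1, len(imu_stamps) - 1)
    left, right = imu_stamps[idx - 1], imu_stamps[idx]
    # pick whichever neighbour is closer in time
    nearest = np.where(image_stamps - left < right - image_stamps, idx - 1, idx)
    dt = np.abs(imu_stamps[nearest] - image_stamps)
    nearest[dt > max_dt] = -1                             # reject loose matches
    return nearest

# Hypothetical usage, assuming per-sensor timestamp files inside one sequence:
# image_stamps = np.loadtxt("seq_01/spad_timestamps.txt")
# imu_stamps   = np.loadtxt("seq_01/imu.csv", delimiter=",", usecols=0)
# matches = associate_by_timestamp(image_stamps, imu_stamps)
```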
📝 Abstract
Exploring high-latitude lunar regions presents an extremely challenging visual environment for robots. The low sunlight elevation angle and minimal light scattering result in a visual field dominated by a high dynamic range featuring long, dynamic shadows. Reproducing these conditions on Earth requires sophisticated simulators and specialized facilities. We introduce a unique dataset recorded at the LunaLab of the SnT, University of Luxembourg, an indoor test facility designed to replicate the optical characteristics of multiple lunar latitudes. Our dataset includes images, inertial measurements, and wheel odometry data from robots navigating seven distinct trajectories under multiple illumination scenarios, simulating high-latitude lunar conditions from dawn to nighttime, with and without the aid of headlights, resulting in 88 distinct sequences containing a total of 1.3M images. Data were captured using a stereo RGB-inertial sensor, a monocular monochrome camera, and, for the first time, a novel single-photon avalanche diode (SPAD) camera. We recorded both static and dynamic image sequences, with robots navigating at slow (5 cm/s) and fast (50 cm/s) speeds. All data are calibrated, synchronized, and timestamped, providing a valuable resource for validating perception tasks ranging from vision-based autonomous navigation to scientific imaging, whether for future lunar missions targeting high-latitude regions or for robots operating across perceptually degraded environments. The dataset can be downloaded from https://zenodo.org/records/13970078?preview=1, and a visual overview is available at https://youtu.be/d7sPeO50_2I. All supplementary material can be found at https://github.com/spaceuma/spice-hl3.
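The SPAD camera's appeal in this setting is that it can operate at very short exposures, down to near-binary photon-detection frames, which can be accumulated over time into usable intensity images even in near darkness. The snippet below is a generic sketch of that accumulation principle, not the dataset's documented processing pipeline; the frame shape, bit depth, and window length are assumptions.

```python
import numpy as np

def accumulate_spad_frames(frames, window=256):
    """Sum consecutive SPAD frames into higher-signal intensity images.

    frames: array of shape (N, H, W) holding binary (0/1) photon-detection
    frames or low-bit-depth counts. Returns an array of shape
    (N // window, H, W) where each output image is the per-pixel photon
    count over one non-overlapping window of `window` input frames.
    """
    n = (len(frames) // window) * window          # drop the incomplete tail
    stacked = frames[:n].reshape(-1, window, *frames.shape[1:])
    return stacked.sum(axis=1, dtype=np.uint32)

# Hypothetical usage with random binary frames standing in for real data:
# frames = np.random.binomial(1, 0.02, size=(1024, 256, 512)).astype(np.uint8)
# intensity = accumulate_spad_frames(frames, window=256)   # -> (4, 256, 512)
```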