🤖 AI Summary
Existing lunar visual datasets lack joint geometric and photometric supervision, illumination diversity, and large-scale coverage, hindering the development of learning-based perception systems for lunar exploration. This work proposes MoonAnything—the first unified lunar visual benchmark integrating real terrain data with physically based rendering—comprising two subsets: LunarGeo, featuring stereo images with dense depth and calibrated camera parameters, and LunarPhoto, offering multi-illumination photorealistic images rendered using spatially varying BRDFs. Together, they provide over 130,000 samples with comprehensive supervision signals. For the first time, this benchmark delivers synchronized geometric and photometric annotations at scale under realistic solar illumination configurations, establishing a unique testbed for low-texture, high-contrast extraterrestrial scenes. The project releases the full dataset, generation tools, and multiple state-of-the-art baselines to advance vision-based perception research on airless celestial bodies.
📝 Abstract
Accurate perception of lunar surfaces is critical for modern lunar exploration missions. However, developing robust learning-based perception systems is hindered by the lack of datasets that provide both geometric and photometric supervision. Existing lunar datasets typically lack geometric ground truth, photometric realism, illumination diversity, or large-scale coverage. In this paper, we introduce MoonAnything, a unified benchmark built on real lunar topography with physically based rendering, providing the first comprehensive geometric and photometric supervision under diverse illumination at large scale. The benchmark comprises two complementary sub-datasets: i) LunarGeo provides stereo images with corresponding dense depth maps and camera calibration, enabling 3D reconstruction and pose estimation; ii) LunarPhoto provides photorealistic images using a spatially varying BRDF model, along with multi-illumination renderings under real solar configurations, enabling reflectance estimation and illumination-robust perception. Together, these datasets offer over 130K samples with comprehensive supervision. Beyond lunar applications, MoonAnything offers a unique and challenging testbed for algorithms under low-texture, high-contrast conditions, with direct relevance to other airless celestial bodies and potential to generalize beyond. We establish baselines using state-of-the-art methods and release the complete dataset along with generation tools to support community extension: https://github.com/clementinegrethen/MoonAnything.
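As a hedged illustration of how stereo supervision like LunarGeo's could be consumed, the sketch below converts a disparity map to metric depth with calibrated parameters (via the standard pinhole relation Z = f·B/d) and back-projects depth to a 3D point cloud. The function names, toy numbers, and intrinsics here are hypothetical, not the dataset's released tooling.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth (meters): Z = f * B / d."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0                      # zero disparity -> point at infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map to a 3D point cloud in the camera frame (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)    # shape (H, W, 3)

# Toy example (hypothetical rig): focal length 1000 px, baseline 0.5 m
depth = disparity_to_depth([[10.0, 20.0], [0.0, 50.0]], 1000.0, 0.5)
print(depth)  # 10 px disparity -> 50 m, 20 px -> 25 m, 0 px -> inf, 50 px -> 10 m
```

The same depth-from-disparity step is what dense depth ground truth lets a benchmark validate directly, rather than relying on self-supervised photometric consistency, which is brittle on low-texture lunar terrain.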