MoonAnything: A Vision Benchmark with Large-Scale Lunar Supervised Data

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing lunar visual datasets lack joint geometric and photometric supervision, illumination diversity, and large-scale coverage, hindering the development of learning-based perception systems for lunar exploration. This work proposes MoonAnything—the first unified lunar visual benchmark integrating real terrain data with physically based rendering—comprising two subsets: LunarGeo, featuring stereo images with dense depth and calibrated camera parameters, and LunarPhoto, offering multi-illumination photorealistic images rendered using spatially varying BRDFs. Together, they provide over 130,000 samples with comprehensive supervision signals. For the first time, this benchmark delivers synchronized geometric and photometric annotations at scale under realistic solar illumination configurations, establishing a unique testbed for low-texture, high-contrast extraterrestrial scenes. The project releases the full dataset, generation tools, and multiple state-of-the-art baselines to advance vision-based perception research on airless celestial bodies.
📝 Abstract
Accurate perception of lunar surfaces is critical for modern lunar exploration missions. However, developing robust learning-based perception systems is hindered by the lack of datasets that provide both geometric and photometric supervision. Existing lunar datasets typically lack geometric ground truth, photometric realism, illumination diversity, or large-scale coverage. In this paper, we introduce MoonAnything, a unified benchmark built on real lunar topography with physically-based rendering, providing the first comprehensive geometric and photometric supervision under diverse illumination at large scale. The benchmark comprises two complementary sub-datasets: i) LunarGeo provides stereo images with corresponding dense depth maps and camera calibration, enabling 3D reconstruction and pose estimation; ii) LunarPhoto provides photorealistic images rendered with a spatially-varying BRDF model, along with multi-illumination renderings under real solar configurations, enabling reflectance estimation and illumination-robust perception. Together, these datasets offer over 130K samples with comprehensive supervision. Beyond lunar applications, MoonAnything offers a unique and challenging testbed for algorithms under low-texture, high-contrast conditions, applies to other airless celestial bodies, and could generalize further. We establish baselines using state-of-the-art methods and release the complete dataset along with generation tools to support community extension: https://github.com/clementinegrethen/MoonAnything.
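The LunarGeo subset pairs stereo images with dense depth maps and camera calibration, which ties depth and stereo disparity together through the standard pinhole relation d = f·B / Z. A minimal sketch of that conversion is shown below; the numeric calibration values and array shapes are illustrative assumptions, not the dataset's actual format.

```python
# Hypothetical sketch: converting a dense metric depth map (as provided by a
# stereo dataset like LunarGeo) into a disparity map via d = f * B / Z.
# Focal length, baseline, and depth values below are illustrative assumptions.
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Pinhole stereo relation: disparity (px) = focal (px) * baseline (m) / depth (m)."""
    depth_m = np.asarray(depth_m, dtype=np.float64)
    disparity = np.zeros_like(depth_m)
    valid = depth_m > 0  # guard against invalid or zero-depth pixels
    disparity[valid] = focal_px * baseline_m / depth_m[valid]
    return disparity

# Illustrative calibration: 720 px focal length, 0.3 m stereo baseline.
depth = np.full((4, 4), 10.0)  # every pixel 10 m from the camera
disp = depth_to_disparity(depth, focal_px=720.0, baseline_m=0.3)
print(disp[0, 0])  # 720 * 0.3 / 10 = 21.6 px
```

The zero-depth mask matters in practice: shadowed or occluded lunar regions often carry invalid depth, and dividing through them would produce infinities.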
Problem

Research questions and friction points this paper is trying to address.

lunar perception
geometric supervision
photometric supervision
dataset scarcity
illumination diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

lunar perception
geometric supervision
photometric realism
physically-based rendering
BRDF modeling
Clémentine Grethen
IRIT - Université de Toulouse, France
Yuang Shi
Ph.D. Candidate, National University of Singapore
Multimedia Systems, 3D Media Streaming
Simone Gasparini
IRIT - Université de Toulouse, France; IPAL, IRL2955, Singapore
Géraldine Morin
Professor
Geometric modeling, 3D, Multimedia