🤖 AI Summary
Existing lunar rendering approaches often rely on simplified or spatially uniform BRDF models, which fail to capture the local reflectance properties of lunar regolith and thereby limit high-fidelity rendering and visual navigation. This work proposes a geometry-to-reflectance learning framework that predicts spatially varying BRDF parameters directly from lunar digital elevation models (DEMs), requiring only a single-view image and known illumination–viewing geometry, without multi-view data or specialized hardware. Built on a U-Net architecture, the method combines differentiable rendering with a physically based lighting model and optimizes photometric consistency between real and synthetic images end to end. Evaluated on the Tycho crater region, it reduces photometric error by 38% relative to a state-of-the-art baseline and achieves clear gains in PSNR, SSIM, and perceptual similarity, marking the first demonstration of inferring spatially varying lunar surface reflectance solely from terrain geometry.
📝 Abstract
We address the problem of estimating realistic, spatially varying reflectance for complex planetary surfaces such as the lunar regolith, which is critical for high-fidelity rendering and vision-based navigation. Existing lunar rendering pipelines rely on simplified or spatially uniform BRDF models whose parameters are difficult to estimate and fail to capture local reflectance variations, limiting photometric realism. We propose Lunar-G2R, a geometry-to-reflectance learning framework that predicts spatially varying BRDF parameters directly from a lunar digital elevation model (DEM), without requiring multi-view imagery, controlled illumination, or dedicated reflectance-capture hardware at inference time. The method leverages a U-Net trained with differentiable rendering to minimize photometric discrepancies between real orbital images and physically based renderings under known viewing and illumination geometry. Experiments on a geographically held-out region of the Tycho crater show that our approach reduces photometric error by 38% compared to a state-of-the-art baseline, while achieving higher PSNR and SSIM and improved perceptual similarity, capturing fine-scale reflectance variations absent from spatially uniform models. To our knowledge, this is the first method to infer a spatially varying reflectance model directly from terrain geometry.
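The core idea (geometry gives shading, and a photometric loss against a real image recovers spatially varying reflectance) can be sketched in a few lines. The toy below is not the paper's method: it swaps the U-Net and full BRDF for a per-pixel Lambertian albedo map fitted by explicit gradient descent on the photometric loss, and the DEM, sun direction, and "observed" image are all synthetic stand-ins. It only illustrates how a differentiable render-and-compare loop lets terrain geometry plus a single image constrain spatially varying reflectance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy DEM patch (stand-in for a real lunar DEM).
H, W = 32, 32
dem = np.sin(np.linspace(0, np.pi, H))[:, None] * np.cos(np.linspace(0, np.pi, W))[None, :]

def normals_from_dem(z, spacing=1.0):
    """Unit surface normals from heights via central differences."""
    dzdx = np.gradient(z, spacing, axis=1)
    dzdy = np.gradient(z, spacing, axis=0)
    n = np.stack([-dzdx, -dzdy, np.ones_like(z)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# Known illumination geometry (assumed sun direction, unit vector).
sun = np.array([0.3, 0.2, 0.9])
sun /= np.linalg.norm(sun)
shading = np.clip(normals_from_dem(dem) @ sun, 0.0, None)  # Lambertian cosine term

# Synthetic "ground-truth" spatially varying albedo and the image it would produce.
true_albedo = 0.1 + 0.2 * rng.random((H, W))
observed = true_albedo * shading

# Render-and-compare: the rendering model albedo*shading is differentiable in
# albedo, so we descend the per-pixel squared photometric error directly
# (the paper instead backpropagates through a U-Net predicting BRDF parameters).
albedo = np.full((H, W), 0.2)
lr = 0.4
for _ in range(200):
    residual = albedo * shading - observed
    albedo -= lr * 2.0 * residual * shading  # exact gradient of the squared error

err = np.abs(albedo - true_albedo).max()
print(f"max albedo error after fitting: {err:.2e}")
```

Because the toy loss is quadratic and separable per pixel, the fit converges quickly wherever the surface is lit; in the real setting the network amortizes this optimization and generalizes to geometry it has not seen.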