Multi-view 3D surface reconstruction from SAR images by inverse rendering

📅 2025-02-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses multi-view 3D surface reconstruction from unconstrained synthetic aperture radar (SAR) imagery—bypassing conventional reliance on interferometric SAR (InSAR) and stringent imaging geometry constraints. Method: We propose the first end-to-end inverse rendering framework for SAR, built upon: (1) a differentiable physical SAR renderer that jointly models electromagnetic scattering and geometric projection; (2) a neural implicit surface representation (MLP) coupled with a coarse-to-fine optimization strategy to jointly invert digital elevation models (DEMs) and backscattering coefficients; and (3) high-fidelity synthetic training data generated via the EMPRISE SAR simulator. Results: Evaluated on photorealistic SAR simulations, our method achieves centimeter-level geometric accuracy and physically consistent scattering reconstruction using only a few non-cooperative views. It is the first to empirically validate the sufficiency of pure geometric parallax for SAR-based 3D reconstruction, establishing a novel paradigm for multi-source remote sensing fusion.

Technology Category

Application Category

📝 Abstract
3D reconstruction of a scene from Synthetic Aperture Radar (SAR) images mainly relies on interferometric measurements, which involve strict constraints on the acquisition process. These last years, progress in deep learning has significantly advanced 3D reconstruction from multiple views in optical imaging, mainly through reconstruction-by-synthesis approaches pioneered by Neural Radiance Fields. In this paper, we propose a new inverse rendering method for 3D reconstruction from unconstrained SAR images, drawing inspiration from optical approaches. First, we introduce a new simplified differentiable SAR rendering model, able to synthesize images from a digital elevation model and a radar backscattering coefficients map. Then, we introduce a coarse-to-fine strategy to train a Multi-Layer Perceptron (MLP) to fit the height and appearance of a given radar scene from a few SAR views. Finally, we demonstrate the surface reconstruction capabilities of our method on synthetic SAR images produced by ONERA's physically-based EMPRISE simulator. Our method showcases the potential of exploiting geometric disparities in SAR images and paves the way for multi-sensor data fusion.
Problem

Research questions and friction points this paper is trying to address.

3D reconstruction from SAR images
inverse rendering method
multi-view surface reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inverse rendering for SAR images
Differentiable SAR rendering model
Coarse-to-fine MLP training strategy
E
Emile Barbier--Renard
LTCI, Télécom Paris, Institut Polytechnique de Paris, 91120 Palaiseau, France
Florence Tupin
Florence Tupin
Télécom ParisTech
image processingremote sensingsynthetic aperture radar
N
Nicolas Trouv'e
DEMR, ONERA, The French Aerospace Laboratory, 91761 Palaiseau, France
L
Loic Denis
Laboratoire Hubert Curien, UMR 5516, CNRS, Institut d’Optique Graduate School, Université de Lyon, UJM-Saint-Étienne, 42023 Saint-Etienne, France