Seeing through Satellite Images at Street Views

📅 2025-05-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenging cross-view synthesis problem from satellite imagery to street-level 360° panoramas (including video). We propose Sat2Density++, the first NeRF-based method that explicitly incorporates street-scene-specific priors—such as sky distribution and global illumination—into the neural radiance field framework, enabling explicit viewpoint disentanglement and joint geometry-appearance optimization. By integrating these priors, our approach effectively mitigates radiance field misalignment caused by extreme viewpoint discrepancies and sparse input observations. Evaluated on urban and suburban datasets, Sat2Density++ generates high-fidelity, multi-view-consistent 360° street-level images and videos. Compared to existing cross-view synthesis methods, it achieves significant improvements in rendering quality and satellite-image fidelity. This work establishes a novel paradigm for synergistic modeling of remote sensing and street-level imagery.

📝 Abstract
This paper studies the task of SatStreet-view synthesis, which aims to render photorealistic street-view panorama images and videos given a satellite image and specified camera positions or trajectories. We formulate the task as learning a neural radiance field from paired images captured from satellite and street viewpoints, which turns out to be a challenging learning problem due to the sparse-view nature of the data and the extremely large viewpoint changes between satellite and street-view images. We tackle these challenges based on a task-specific observation that street-view-specific elements, including the sky and illumination effects, are visible only in street-view panoramas, and present a novel approach, Sat2Density++, that achieves photorealistic street-view panorama rendering by modeling these street-view-specific elements in neural networks. In experiments, our method is evaluated on both urban and suburban scene datasets, demonstrating that Sat2Density++ renders photorealistic street-view panoramas that are consistent across multiple views and faithful to the satellite image.
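To make the sky-modeling idea concrete, below is a minimal sketch of one common way a density-field renderer can composite a separately predicted sky color: standard alpha compositing along a ray, with the sky weighted by the ray's residual transmittance (the light that passes through the whole scene volume). This is an illustrative assumption about the general technique, not the paper's actual implementation; the function name, shapes, and `sky_color` input are hypothetical.

```python
import numpy as np

def render_ray(sigmas, colors, deltas, sky_color):
    """Volume-render one ray and composite a sky color behind the scene.

    sigmas:    (N,)   densities at N samples along the ray (hypothetical inputs)
    colors:    (N, 3) RGB at each sample
    deltas:    (N,)   distances between consecutive samples
    sky_color: (3,)   separately predicted sky RGB for this ray direction
    """
    # Opacity of each segment from its density and length.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance reaching each sample: product of (1 - alpha) before it.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = trans * alphas
    scene_rgb = (weights[:, None] * colors).sum(axis=0)
    # Whatever transmittance is left after the last sample sees the sky.
    residual = 1.0 - weights.sum()
    return scene_rgb + residual * sky_color
```

With this formulation, rays that hit dense geometry get scene color, while rays toward open sky (near-zero density) fall through to the sky term, which is one way a street-view-only element like the sky can be disentangled from the satellite-derived density.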
Problem

Research questions and friction points this paper is trying to address.

Render street-view panoramas from satellite images
Overcome large viewpoint changes between satellite and street views
Model sky and illumination effects for realistic rendering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural radiance field for satellite-street view synthesis
Modeling sky and illumination effects in neural networks
Sat2Density++ for photorealistic street-view panorama rendering
Ming Qian
State Key Lab. LIESMARS, Wuhan University, Wuhan, 430079, China
Bin Tan
Ph.D. Student, Wuhan University
Computer Vision
Qiuyu Wang
Ant Group
Xianwei Zheng
State Key Lab. LIESMARS, Wuhan University, Wuhan, 430079, China
Hanjiang Xiong
State Key Lab. LIESMARS, Wuhan University, Wuhan, 430079, China
Gui-Song Xia
School of Artificial Intelligence, Wuhan University, China
Artificial Intelligence · Computer Vision · Photogrammetry · Remote Sensing · Robotics
Yujun Shen
Ant Group
Generative Modeling · Computer Vision · Deep Learning
Nan Xue
Ant Group