🤖 AI Summary
This work addresses the quality degradation of out-of-trajectory novel view synthesis in autonomous driving, which stems from weak geometric support and sparse supervision. To tackle this challenge, the authors propose Geo-EVS, a framework that integrates Geometry-Aware Reprojection (GAR) with an Artifact-Guided Latent Diffusion model (AGLD) under unified geometric constraints to generate high-fidelity novel views. The method leverages reprojection artifact masks to guide structural recovery and incorporates a fine-tuned VGGT network for colored point cloud reconstruction. Evaluated on the Waymo dataset, Geo-EVS significantly improves synthesis quality and geometric accuracy under sparse viewpoints—particularly in large-baseline and low-coverage scenarios—and also enhances downstream 3D object detection performance.
📝 Abstract
Extrapolative novel view synthesis can reduce camera-rig dependency in autonomous driving by generating standardized virtual views from heterogeneous sensors. Existing methods degrade outside recorded trajectories because extrapolated poses provide weak geometric support and no dense target-view supervision. The key is to explicitly expose the model to out-of-trajectory condition defects during training. We propose Geo-EVS, a geometry-conditioned framework trained under sparse supervision. Geo-EVS has two components. Geometry-Aware Reprojection (GAR) uses a fine-tuned VGGT model to reconstruct colored point clouds and reproject them to observed and virtual target poses, producing geometric condition maps. This design unifies the reprojection path between training and inference. Artifact-Guided Latent Diffusion (AGLD) injects reprojection-derived artifact masks during training so the model learns to recover structure under missing support. For evaluation, we use a LiDAR-Projected Sparse-Reference (LPSR) protocol when dense extrapolated-view ground truth is unavailable. On Waymo, Geo-EVS improves sparse-view synthesis quality and geometric accuracy, especially in high-angle and low-coverage settings. It also improves downstream 3D object detection.
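To make the GAR idea concrete, the sketch below shows the basic operation it builds on: splatting a colored world-frame point cloud into a target camera to obtain a condition map, and marking uncovered pixels as an artifact mask (the "missing support" signal that AGLD is trained against). This is a minimal illustrative implementation, not the paper's code; the pinhole intrinsics `K`, world-to-camera extrinsic `T_wc`, and the nearest-point z-buffer rule are our simplifying assumptions.

```python
import numpy as np

def reproject_points(points_xyz, colors, K, T_wc, hw):
    """Splat a colored world-frame point cloud into a target camera.

    points_xyz: (N, 3) world coordinates
    colors:     (N, 3) RGB values
    K:          (3, 3) pinhole intrinsics
    T_wc:       (4, 4) world-to-camera extrinsic
    hw:         (H, W) output resolution
    Returns a rendered condition map and a binary artifact mask
    (1 where no point provides support, i.e. a reprojection hole).
    """
    H, W = hw
    # Transform points into the camera frame.
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_wc @ pts_h.T).T[:, :3]
    front = cam[:, 2] > 1e-6            # keep only points in front of the camera
    cam, cols = cam[front], colors[front]
    # Perspective projection to pixel coordinates.
    uvz = (K @ cam.T).T
    u = (uvz[:, 0] / uvz[:, 2]).astype(int)
    v = (uvz[:, 1] / uvz[:, 2]).astype(int)
    inb = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z, cols = u[inb], v[inb], cam[inb, 2], cols[inb]

    img = np.zeros((H, W, 3))
    depth = np.full((H, W), np.inf)
    # Z-buffered splat: the nearest point wins each pixel.
    for ui, vi, zi, ci in zip(u, v, z, cols):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
            img[vi, ui] = ci
    artifact_mask = np.isinf(depth).astype(np.uint8)  # 1 = hole / missing support
    return img, artifact_mask
```

For an extrapolated pose, large portions of the mask are typically 1 (occlusions and unobserved regions), which is exactly the defect pattern the diffusion model is exposed to during training.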