GS-RoadPatching: Inpainting Gaussians via 3D Searching and Placing for Driving Scenes

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS) inpainting methods rely on 2D generative models, struggle to preserve cross-modal spatiotemporal consistency, and require time-consuming retraining of Gaussian parameters. To address these limitations, this paper proposes the first search-and-place inpainting framework operating entirely in 3D space. Leveraging the structural redundancy inherent in driving scenes, the method extracts multi-scale local contextual features from a complete 3DGS reconstruction, performs a structured 3D spatial search to identify geometrically and semantically similar patches, and then substitutes the best match and harmonizes it through a fusion optimization. This enables end-to-end 3D content completion without invoking 2D diffusion or GAN models or retraining Gaussian parameters. Evaluated on multiple autonomous driving datasets, the approach achieves state-of-the-art performance, significantly improving inpainting accuracy and cross-modal compatibility while generalizing well to diverse real-world scenes.
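The "search" step described above amounts to nearest-neighbor matching over patch feature descriptors. A minimal sketch, assuming each candidate patch has already been summarized into a pooled feature vector and using cosine similarity as the matching metric (the function name, descriptor pooling, and metric are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def best_matching_patch(query_desc, candidate_descs):
    """Return the index of the candidate patch whose descriptor is
    most similar (by cosine similarity) to the masked query patch."""
    q = query_desc / np.linalg.norm(query_desc)
    c = candidate_descs / np.linalg.norm(candidate_descs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity to each candidate
    return int(np.argmax(sims))

# Toy usage: 3 candidate patches with 4-D descriptors.
cands = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0]])
query = np.array([1.0, 0.05, 0.0, 0.0])
print(best_matching_patch(query, cands))  # -> 0
```

In practice the paper's structured 3D search also exploits spatial organization of the scene rather than scanning all candidates exhaustively; this sketch only shows the similarity-matching core.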

📝 Abstract
This paper presents GS-RoadPatching, an inpainting method for driving scene completion that refers to completely reconstructed regions represented by 3D Gaussian Splatting (3DGS). Unlike existing 3DGS inpainting methods that perform generative completion relying on 2D perspective-view-based diffusion or GAN models to predict limited appearance or depth cues for missing regions, our approach enables substitutional scene inpainting and editing directly in the 3DGS modality, freeing it from the spatial-temporal consistency requirements of 2D cross-modal priors and eliminating the need for time-intensive retraining of Gaussians. Our key insight is that the highly repetitive patterns in driving scenes often share multi-modal similarities within the implicit 3DGS feature space and are particularly suitable for structural matching, enabling effective 3DGS-based substitutional inpainting. Practically, we construct feature-embedded 3DGS scenes with a patch measurement method that abstracts local context at different scales and, subsequently, propose a structural search method to find candidate patches in 3D space effectively. Finally, we propose a simple yet effective substitution-and-fusion optimization for better visual harmony. We conduct extensive experiments on multiple publicly available datasets to demonstrate the effectiveness and efficiency of the proposed method in driving scenes, and the results validate that our method achieves state-of-the-art performance compared to baseline methods in terms of both quality and interoperability. Additional experiments in general scenes also demonstrate the applicability of the proposed 3D inpainting strategy. The project page and code are available at: https://shanzhaguoo.github.io/GS-RoadPatching/
Problem

Research questions and friction points this paper is trying to address.

Completing missing regions in driving scenes using a 3D Gaussian Splatting (3DGS) representation
Enabling substitutional scene inpainting directly through the 3DGS modality, without 2D dependencies
Finding structurally similar patches in 3D space for effective driving scene completion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses 3D Gaussian Splatting for substitutional scene inpainting
Proposes structural search method to find candidate 3D patches
Applies substitution-and-fusion optimization for visual harmony
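The substitution-and-fusion step can be pictured as copying the matched patch's Gaussian attributes into the hole, then blending the patch boundary toward its surrounding context. The sketch below is a hypothetical simplification: the function name, the per-row attribute layout, and the fixed linear blend weight `w` are assumptions, and the paper's actual fusion is an optimization rather than a one-shot blend:

```python
import numpy as np

def place_and_fuse(scene, hole_idx, patch, boundary_idx, w=0.5):
    """Substitute the matched patch's attribute rows into the hole,
    then blend boundary rows toward the mean of the surrounding
    scene context for visual harmony (simplified stand-in for the
    paper's fusion optimization)."""
    out = scene.copy()
    out[hole_idx] = patch                       # substitution ("place")
    context = np.ones(len(out), dtype=bool)
    context[hole_idx] = False                   # rows outside the hole
    context_mean = out[context].mean(axis=0)
    # linear blend of boundary Gaussians with the context statistics
    out[boundary_idx] = w * out[boundary_idx] + (1 - w) * context_mean
    return out

# Toy usage: 4 "Gaussians" with 2 attributes each; rows 1-2 are the hole,
# row 1 sits on the patch boundary.
fused = place_and_fuse(np.ones((4, 2)), [1, 2], np.zeros((2, 2)), [1])
print(fused[1])  # boundary row blended halfway toward the context mean
```

Real Gaussian attributes (position, covariance, opacity, spherical-harmonic color) would need attribute-specific handling rather than a uniform blend; this sketch only conveys the substitute-then-harmonize structure.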
Guo Chen
Beijing Normal University, China
Jiarun Liu
Unmanned Vehicle Dept., Cainiao, Alibaba & State Key Lab of CAD&CG, Zhejiang University, China
Sicong Du
Unmanned Vehicle Dept., Cainiao, Alibaba, China
Chenming Wu
Researcher, Baidu Inc.
Robotics, Graphics, 3D Vision, Computational Design
Deqi Li
Beijing Normal University, China
Shi-Sheng Huang
Associate Professor, Beijing Normal University
Online 3D Reconstruction, Dynamic View Synthesis, vSLAM
Guofeng Zhang
State Key Lab of CAD&CG, Zhejiang University, China
Sheng Yang
Unmanned Vehicle Dept., Cainiao, Alibaba, China