TraGraph-GS: Trajectory Graph-based Gaussian Splatting for Arbitrary Large-Scale Scene Rendering

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address two key challenges in large-scale novel view synthesis, namely rigid spatial partitioning that fails to adapt to arbitrary camera trajectories and region fusion that causes Gaussian overlap and texture distortion, this paper proposes a trajectory-guided, spatially adaptive Gaussian splatting method. The approach features: (1) a camera-trajectory graph that enables dynamic, semantically aware spatial partitioning; (2) graph-regularized optimization that jointly enforces long-range geometric consistency and local texture fidelity; and (3) a progressive rendering strategy that explicitly suppresses Gaussian-overlap artifacts. Evaluated on four aerial and four ground-level large-scale datasets, the method achieves average PSNR gains of 1.86 dB and 1.62 dB, respectively, significantly outperforming state-of-the-art methods while delivering strong generalization across diverse trajectory patterns and good computational efficiency.

📝 Abstract
High-quality novel view synthesis for large-scale scenes presents a challenging dilemma in 3D computer vision. Existing methods typically partition large scenes into multiple regions, reconstruct a 3D representation using Gaussian splatting for each region, and eventually merge them for novel view rendering. They can accurately render specific scenes, yet they do not generalize effectively for two reasons: (1) rigid spatial partition techniques struggle with arbitrary camera trajectories, and (2) the merging of regions results in Gaussian overlap that distorts texture details. To address these challenges, we propose TraGraph-GS, leveraging a trajectory graph to enable high-precision rendering for arbitrarily large-scale scenes. We present a graph-based spatial partitioning method for large-scale scenes, which incorporates a regularization constraint to enhance the rendering of textures and distant objects, as well as a progressive rendering strategy to mitigate artifacts caused by Gaussian overlap. Experimental results on four aerial and four ground datasets demonstrate its superior performance and remarkable efficiency: our method achieves an average PSNR improvement of 1.86 dB on aerial datasets and 1.62 dB on ground datasets compared to state-of-the-art approaches.
Problem

Research questions and friction points this paper is trying to address.

Rendering large-scale scenes with arbitrary camera trajectories
Avoiding Gaussian overlap distortion in merged 3D representations
Improving texture and distant object rendering accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trajectory graph-based spatial partitioning method
Regularization constraint enhances texture rendering
Progressive rendering reduces Gaussian overlap artifacts
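The paper's code is not reproduced here, so the following is only a hypothetical sketch of the general idea behind trajectory-graph partitioning: camera poses become graph nodes, consecutive frames and spatially close frames (loop closures) become edges, and regions are grown along the graph rather than carved from a rigid spatial grid. All function names, thresholds, and the toy trajectory are illustrative, not the authors' implementation.

```python
import math
from collections import defaultdict, deque

def build_trajectory_graph(positions, radius=2.0):
    """Connect consecutive camera poses along the trajectory, plus any
    non-consecutive poses within `radius` of each other (loop closures).
    Thresholds are illustrative only."""
    n = len(positions)
    adj = defaultdict(set)
    for i in range(n - 1):              # trajectory edges
        adj[i].add(i + 1)
        adj[i + 1].add(i)
    for i in range(n):                  # proximity (loop-closure) edges
        for j in range(i + 2, n):
            if math.dist(positions[i], positions[j]) <= radius:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def partition_by_bfs(adj, n, max_region_size=4):
    """Greedy BFS partitioning: grow each region along graph edges until it
    reaches max_region_size, so regions follow the camera trajectory instead
    of a fixed spatial grid."""
    region_of = [-1] * n
    regions = []
    for seed in range(n):
        if region_of[seed] != -1:
            continue
        members, q = [], deque([seed])
        while q and len(members) < max_region_size:
            v = q.popleft()
            if region_of[v] != -1:
                continue
            region_of[v] = len(regions)
            members.append(v)
            q.extend(u for u in sorted(adj[v]) if region_of[u] == -1)
        regions.append(members)
    return regions

# Toy 2D trajectory: a loop whose last pose revisits the start.
poses = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1), (0, 0.2)]
graph = build_trajectory_graph(poses, radius=0.9)
parts = partition_by_bfs(graph, len(poses), max_region_size=3)
print(parts)  # → [[0, 1, 6], [2, 3, 4], [5]]
```

Note how the loop-closure edge between pose 0 and pose 6 puts the revisited start of the trajectory into the same region as the first frames, which a rigid grid split along one axis would not guarantee.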
Xiaohan Zhang
School of Future Technology, South China University of Technology, Guangzhou 511442, China
Sitong Wang
Columbia University
Human-Computer Interaction · Creativity Support Tools · Generative AI
Yushen Yan
School of Future Technology, South China University of Technology, Guangzhou 511442, China
Yi Yang
School of Future Technology, South China University of Technology, Guangzhou 511442, China
Mingda Xu
School of Future Technology, South China University of Technology, Guangzhou 511442, China
Qi Liu
School of Future Technology, South China University of Technology, Guangzhou 511442, China