🤖 AI Summary
Existing methods for novel view synthesis from large, unstructured image sequences suffer from high computational latency and memory bottlenecks, and SLAM-based approaches fail under wide-baseline or large-scale capture, particularly during camera pose estimation and 3D Gaussian optimization. This paper introduces the first real-time, online Gaussian radiance field framework that supports simultaneous capture and reconstruction. The method jointly optimizes camera poses and the radiance field via learning-based fast initial pose estimation and a GPU-efficient miniature bundle adjustment. An incremental Gaussian primitive generation scheme, coupled with anchor-point clustering and offloading, alleviates memory and compute constraints, while direct Gaussian sampling and progressive anchor storage improve rendering efficiency. Evaluated on diverse datasets, the approach achieves reconstruction on the fly, processing far faster than offline methods while matching state-of-the-art rendering quality, and fully supports dense, wide-baseline, and ultra-large-scale scenes.
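The anchor-point clustering and offloading idea can be illustrated with a minimal voxel-grid sketch. This is not the paper's implementation; the function name `cluster_to_anchors`, the uniform grid, and the weight-averaged merge are assumptions chosen for brevity.

```python
import numpy as np

def cluster_to_anchors(centers, weights, cell=1.0):
    """Voxel-grid clustering sketch: merge Gaussian centers that fall into the
    same grid cell into a single anchor (weighted mean of member centers).

    centers : (N, 3) Gaussian primitive positions
    weights : (N,)   per-primitive merge weights (e.g. opacity, illustrative)
    cell    : grid cell size controlling the merge scale
    Returns (anchors, inv): anchor positions and each primitive's anchor index.
    """
    keys = np.floor(centers / cell).astype(np.int64)          # cell index per primitive
    _, inv = np.unique(keys, axis=0, return_inverse=True)     # one group per occupied cell
    n = inv.max() + 1
    w_sum = np.zeros(n)
    np.add.at(w_sum, inv, weights)                            # total weight per cell
    anchors = np.zeros((n, 3))
    np.add.at(anchors, inv, centers * weights[:, None])       # weighted sum per cell
    anchors /= w_sum[:, None]                                 # weighted mean -> anchor
    return anchors, inv
```

In a full system, each anchor's member primitives could then be offloaded from GPU memory and reloaded when the camera approaches their cell, keeping only viewpoint-relevant primitives resident.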
📝 Abstract
Radiance field methods such as 3D Gaussian Splatting (3DGS) allow easy reconstruction from photos, enabling free-viewpoint navigation. Nonetheless, pose estimation with Structure from Motion and 3DGS optimization can each still take minutes to hours of computation after capture is complete. SLAM methods combined with 3DGS are fast but struggle with wide camera baselines and large scenes. We present an on-the-fly method that produces camera poses and a trained 3DGS immediately after capture, handling dense and wide-baseline captures of ordered photo sequences as well as large-scale scenes. To do this, we first introduce fast initial pose estimation, exploiting learned features and a GPU-friendly mini bundle adjustment. We then introduce direct sampling of Gaussian primitive positions and shapes, incrementally spawning primitives where required, which significantly accelerates training. Together, these two efficient steps enable fast and robust joint optimization of poses and Gaussian primitives. Our incremental approach handles large-scale scenes through scalable radiance field construction: 3DGS primitives are progressively clustered, stored in anchors, and offloaded from the GPU, and clustered primitives are progressively merged, keeping 3DGS at the scale required for any viewpoint. We evaluate our solution on a variety of datasets and show that it provides on-the-fly processing for all the capture scenarios and scene sizes we target, while remaining competitive in speed, image quality, or both with methods that only handle specific capture styles or scene sizes.
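To make the "GPU-friendly mini bundle adjustment" step concrete, here is a hedged single-camera sketch: Gauss-Newton with a numerical Jacobian over a 6-DoF axis-angle pose under a fixed-focal pinhole model. The names (`mini_ba`, `project`, `rodrigues`), the damping constant, and the single-pose restriction are illustrative assumptions; the paper jointly optimizes poses and Gaussian primitives on the GPU.

```python
import numpy as np

def rodrigues(w):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(pose, pts3d, f=500.0):
    """Pinhole projection of world points; pose = (axis-angle, translation)."""
    R = rodrigues(pose[:3])
    p_cam = pts3d @ R.T + pose[3:]
    return f * p_cam[:, :2] / p_cam[:, 2:3]

def mini_ba(pose0, pts3d, obs2d, iters=20):
    """Gauss-Newton refinement of one camera pose against 2D observations,
    using a forward-difference Jacobian and light damping for stability."""
    pose = pose0.copy()
    eps = 1e-6
    for _ in range(iters):
        r = (project(pose, pts3d) - obs2d).ravel()        # reprojection residuals
        J = np.zeros((r.size, 6))
        for j in range(6):                                # numerical Jacobian
            d = np.zeros(6)
            d[j] = eps
            J[:, j] = ((project(pose + d, pts3d) - obs2d).ravel() - r) / eps
        # Damped normal equations (Levenberg-style) keep the solve well-posed.
        step = np.linalg.solve(J.T @ J + 1e-6 * np.eye(6), -J.T @ r)
        pose = pose + step
    return pose
```

A learned-feature front end would supply the 2D observations and a rough initial pose; the refinement above then only needs a handful of iterations, which is what makes per-frame processing feasible.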