Compensating Spatiotemporally Inconsistent Observations for Online Dynamic 3D Gaussian Splatting

📅 2025-05-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address temporal flickering and artifacts in online dynamic 3D Gaussian splatting reconstruction caused by sensor noise and observation inconsistency, this paper proposes the first observation error decoupling and compensation framework. It models input observations as the sum of an ideal scene signal and a learnable error term, compensated in real time via a lightweight error modeling network. Integrated into the online optimization pipeline of dynamic Gaussian splatting and augmented with spatiotemporal regularization losses, the method enables incremental reconstruction from streaming video. Evaluated on multiple dynamic datasets, it achieves a 1.8 dB PSNR gain, a 0.04 reduction in LPIPS, and a 62% decrease in temporal jitter over static regions—demonstrating substantial improvements in both rendering quality and temporal consistency over state-of-the-art online methods.

Technology Category

Application Category

📝 Abstract
Online reconstruction of dynamic scenes is significant as it enables learning scenes from live-streaming video inputs, while existing offline dynamic reconstruction methods rely on recorded video inputs. However, previous online reconstruction approaches have primarily focused on efficiency and rendering quality, overlooking the temporal consistency of their results, which often contain noticeable artifacts in static regions. This paper identifies that errors such as noise in real-world recordings affect temporal inconsistency in online reconstruction. We propose a method that enhances temporal consistency in online reconstruction from observations with temporal inconsistency which is inevitable in cameras. We show that our method restores the ideal observation by subtracting the learned error. We demonstrate that applying our method to various baselines significantly enhances both temporal consistency and rendering quality across datasets. Code, video results, and checkpoints are available at https://bbangsik13.github.io/OR2.
Problem

Research questions and friction points this paper is trying to address.

Enhances temporal consistency in online dynamic scene reconstruction
Addresses noise-induced errors in real-world video recordings
Improves rendering quality by learning and subtracting observation errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhances temporal consistency in online reconstruction
Subtracts learned error to restore ideal observation
Improves both temporal consistency and rendering quality
🔎 Similar Papers
No similar papers found.
Y
Youngsik Yun
Yonsei University, Republic of Korea
Jeongmin Bae
Jeongmin Bae
a graduate student at Yonsei University
Deep LearningGenerative models3D
H
H. Son
Yonsei University, Republic of Korea
S
Seoha Kim
Electronics and Telecommunications Research Institute, Republic of Korea
H
Hahyun Lee
Electronics and Telecommunications Research Institute, Republic of Korea
G
G. Bang
Electronics and Telecommunications Research Institute, Republic of Korea
Youngjung Uh
Youngjung Uh
Yonsei University
Generative models