🤖 AI Summary
Existing Gaussian splatting methods suffer degraded reconstruction quality and temporal inconsistency on long-duration sequences, large motions, and occlusions, and their outputs are incompatible with standard video codecs. This work proposes a unified representation that maps 4D Gaussian attributes onto structured multi-scale UV atlases and optimizes Gaussian parameters directly in the UV domain for temporally coherent fitting. Optical flow–guided dynamic Gaussian identification and a keyframe mechanism handle complex motion and occlusions, yielding, for the first time, an efficient, streaming-compatible 4D volumetric video representation that works with standard video encoders. Evaluated on the newly introduced PackUV-2B dataset of 2 billion frames, the method reconstructs sequences up to 30 minutes long with high fidelity and stability, significantly outperforming existing approaches in rendering quality.
📝 Abstract
Volumetric videos offer immersive 4D experiences, but remain difficult to reconstruct, store, and stream at scale. Existing Gaussian Splatting based methods achieve high-quality reconstruction but break down on long sequences, suffer temporal inconsistency, and fail under large motions and disocclusions. Moreover, their outputs are typically incompatible with conventional video coding pipelines, which prevents practical deployment.
We introduce PackUV, a novel 4D Gaussian representation that maps all Gaussian attributes into a sequence of structured, multi-scale UV atlases, enabling compact, image-native storage. To fit this representation from multi-view videos, we propose PackUV-GS, a temporally consistent fitting method that directly optimizes Gaussian parameters in the UV domain. A flow-guided Gaussian labeling and video keyframing module identifies dynamic Gaussians, stabilizes static regions, and preserves temporal coherence even under large motions and disocclusions. The resulting UV atlas format is the first unified volumetric video representation compatible with standard video codecs (e.g., FFV1) without quality loss, enabling efficient streaming within existing multimedia infrastructure.
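To make the "image-native storage" idea concrete, here is a minimal sketch of packing per-Gaussian attributes into a single-resolution UV atlas image, one Gaussian per texel with attributes concatenated along the channel axis. This is an illustrative layout only; the paper's actual multi-scale atlas structure and channel assignment are not specified here, and the function and attribute split (3 position + 3 scale + 4 quaternion + 1 opacity + 3 color = 14 channels) are assumptions for the example.

```python
import numpy as np

def pack_gaussians_to_uv(positions, scales, rotations, opacities, colors,
                         atlas_size=256):
    """Pack per-Gaussian attributes into one UV atlas frame (hypothetical layout).

    Each Gaussian occupies one texel in row-major order; attributes are
    concatenated along the channel axis. Real PackUV atlases are multi-scale.
    """
    n = positions.shape[0]
    assert n <= atlas_size * atlas_size, "atlas too small for Gaussian count"
    # (n, 14): 3 pos + 3 scale + 4 quat + 1 opacity + 3 color
    attrs = np.concatenate([positions, scales, rotations, opacities, colors],
                           axis=1)
    atlas = np.zeros((atlas_size, atlas_size, attrs.shape[1]), dtype=np.float32)
    # Fill the first n texels; the remainder stays zero (unused padding).
    atlas.reshape(-1, attrs.shape[1])[:n] = attrs
    return atlas

# Example: pack 1000 Gaussians with random attributes into a 256x256 atlas.
rng = np.random.default_rng(0)
atlas = pack_gaussians_to_uv(
    positions=rng.standard_normal((1000, 3)).astype(np.float32),
    scales=rng.random((1000, 3)).astype(np.float32),
    rotations=rng.standard_normal((1000, 4)).astype(np.float32),
    opacities=rng.random((1000, 1)).astype(np.float32),
    colors=rng.random((1000, 3)).astype(np.float32),
)
print(atlas.shape)  # (256, 256, 14)
```

Once attributes live in per-frame images like this, a sequence of atlases could in principle be quantized to 16-bit image planes and passed through a lossless codec such as FFV1 (e.g., via `ffmpeg -c:v ffv1`), which is what makes the representation compatible with existing video pipelines.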
To evaluate long-duration volumetric capture, we present PackUV-2B, the largest multi-view video dataset to date, featuring more than 50 synchronized cameras, substantial motion, and frequent disocclusions across 100 sequences and 2 billion frames. Extensive experiments demonstrate that our method surpasses existing baselines in rendering fidelity while scaling to sequences up to 30 minutes long with consistent quality.