ViVo: A Dataset for Volumetric Video Reconstruction and Compression

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing neural volumetric video datasets exhibit limited semantic and low-level visual diversity, hindering research on reconstruction and compression under realistic production conditions. To address this, we introduce the first high-fidelity volumetric video dataset explicitly designed for real-world production workflows—diversifying both human physiological attributes (e.g., skin, hair) and complex optical phenomena (e.g., transparency, specular reflection, liquid dynamics). The dataset provides synchronized 14-view RGB-D video (30 FPS), audio, 2D instance masks, and dense 3D point clouds, all rigorously calibrated frame-by-frame and annotated with high-accuracy 3D reconstructions. It enables joint evaluation of reconstruction and compression, and benchmarking reveals critical bottlenecks of three state-of-the-art reconstruction methods and two volumetric video compression algorithms in practical scenarios. The dataset is publicly released, establishing a new standard evaluation resource for the volumetric video community.

📝 Abstract
As research on neural volumetric video reconstruction and compression flourishes, there is a need for diverse and realistic datasets, which can be used to develop and validate reconstruction and compression models. However, existing volumetric video datasets lack diverse content in terms of both semantic and low-level features that are commonly present in real-world production pipelines. In this context, we propose a new dataset, ViVo, for VolumetrIc VideO reconstruction and compression. The dataset is faithful to real-world volumetric video production and is the first dataset to extend the definition of diversity to include both human-centric characteristics (skin, hair, etc.) and dynamic visual phenomena (transparent, reflective, liquid, etc.). Each video sequence in this database contains raw data including fourteen multi-view RGB and depth video pairs, synchronized at 30FPS with per-frame calibration and audio data, and their associated 2-D foreground masks and 3-D point clouds. To demonstrate the use of this database, we have benchmarked three state-of-the-art (SotA) 3-D reconstruction methods and two volumetric video compression algorithms. The obtained results evidence the challenging nature of the proposed dataset and the limitations of existing datasets for both volumetric video reconstruction and compression tasks, highlighting the need to develop more effective algorithms for these applications. The database and the associated results are available at https://vivo-bvicr.github.io/
Problem

Research questions and friction points this paper is trying to address.

Lack of diverse, realistic datasets for volumetric video reconstruction and compression
Existing datasets omit semantic and low-level visual features common in real-world production pipelines
Need for more effective algorithms that handle human-centric content and dynamic visual phenomena
Innovation

Methods, ideas, or system contributions that make the work stand out.

A diverse, production-faithful dataset for volumetric video reconstruction and compression
Covers both human-centric characteristics (skin, hair) and dynamic visual phenomena (transparency, reflection, liquids)
Benchmarks three SotA 3-D reconstruction methods and two volumetric video compression algorithms
Adrian Azzarelli
University of Bristol
video, 3-D capture, cinematography, AI
Ge Gao
Visual Information Lab, University of Bristol, Bristol, BS1 5DD, U.K.
Ho Man Kwan
University of Bristol
Deep Learning, Video Compression
Fan Zhang
Visual Information Lab, University of Bristol, Bristol, BS1 5DD, U.K.
N. Anantrasirichai
Visual Information Lab, University of Bristol, Bristol, BS1 5DD, U.K.
Oliver Moolan-Feroze
Condense Reality Ltd, 1 Canon’s Road, Bristol, BS1 5TX, U.K.
David R. Bull
Visual Information Lab, University of Bristol, Bristol, BS1 5DD, U.K.