Nebula: Enable City-Scale 3D Gaussian Splatting in Virtual Reality via Collaborative Rendering and Accelerated Stereo Rasterization

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scalability limitations of city-scale 3D Gaussian Splatting (3DGS) in VR—caused by bandwidth and computational bottlenecks—this paper proposes a cloud-edge collaborative rendering framework. Methodologically: (i) we introduce the first temporal-aware Level-of-Detail (LoD) search to reduce redundant memory accesses; (ii) we design a binocular-shared stereo rasterization scheme with bit-precision preservation to enhance computational efficiency; and (iii) we implement streaming of intermediate rendering results to avoid distortion from lossy video compression. Our contributions include a 2.7× reduction in motion-to-photon latency and a 1925% bandwidth saving over lossy video streaming. To our knowledge, this is the first work enabling low-latency, high-fidelity real-time rendering of city-scale 3DGS in VR.
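The temporal-aware LoD search can be sketched as below. This is an illustrative reconstruction, not Nebula's actual algorithm: the per-cell distance-threshold LoD rule, the cell layout, and the reuse threshold `eps` are all assumptions made for the sketch. The idea it demonstrates is the one stated above: exploit temporal coherence across frames so that cells whose view geometry barely changed skip the redundant (memory-heavy) search.

```python
# Illustrative sketch (assumed data structures, not the paper's implementation):
# a temporal-aware LoD search that reuses the previous frame's per-cell LoD
# choice and only re-evaluates cells whose camera distance changed noticeably.
import math

def select_lod(distance, lod_thresholds=(10.0, 50.0, 200.0)):
    """Pick an LoD index from camera distance (coarser with distance)."""
    for lod, threshold in enumerate(lod_thresholds):
        if distance < threshold:
            return lod
    return len(lod_thresholds)

def temporal_lod_search(cells, camera, prev_result, eps=1.0):
    """cells: {cell_id: (x, y, z) center}; prev_result: {cell_id: (dist, lod)}.
    Reuses the cached LoD when the camera-to-cell distance moved less than
    eps, skipping the full per-cell search for temporally coherent frames."""
    result = {}
    for cell_id, center in cells.items():
        dist = math.dist(center, camera)
        cached = prev_result.get(cell_id)
        if cached is not None and abs(cached[0] - dist) < eps:
            result[cell_id] = (dist, cached[1])        # temporal reuse
        else:
            result[cell_id] = (dist, select_lod(dist))  # full search
    return result
```

Under small head motion, most cells hit the reuse branch, which is where the reduction in redundant memory access comes from.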

📝 Abstract
3D Gaussian splatting (3DGS) has recently drawn significant attention in the computer architecture community. However, current architectural designs often overlook 3DGS scalability, making them fragile for extremely large-scale 3DGS. Meanwhile, VR's bandwidth requirements make it impossible to deliver high-fidelity, smooth VR content from the cloud. We present Nebula, a coherent acceleration framework for large-scale 3DGS collaborative rendering. Instead of streaming videos, Nebula streams intermediate results after the LoD search, reducing data communication between the cloud and the client by 1925%. To further improve the motion-to-photon experience, we introduce a temporal-aware LoD search in the cloud that tames irregular memory access and reduces redundant data access by exploiting temporal coherence across frames. On the client side, we propose a novel stereo rasterization that lets the two eyes share most computations during stereo rendering with bit-accurate quality. With minimal hardware augmentations, Nebula achieves a 2.7× motion-to-photon speedup and reduces bandwidth by 1925% over lossy video streaming.
Problem

Research questions and friction points this paper is trying to address.

Enables city-scale 3D Gaussian splatting in VR via collaborative rendering
Reduces bandwidth by streaming intermediate results instead of video
Accelerates stereo rasterization for faster motion-to-photon response
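The "stream intermediate results instead of video" point can be illustrated with a minimal sketch. The wire format here (a flat list of `(gaussian_id, lod)` pairs) is purely hypothetical, not Nebula's actual payload; it only shows the principle that the cloud ships the compact post-LoD-search result and the client rasterizes locally, so no lossy video codec ever touches the pixels.

```python
# Illustrative sketch (assumed wire format, not Nebula's protocol): the cloud
# serializes the post-LoD-search visible set instead of an encoded video
# frame; the client decodes it and rasterizes locally, avoiding codec loss.
import struct

def pack_lod_result(visible):
    """Serialize [(gaussian_id, lod), ...] into a compact binary payload."""
    payload = struct.pack("<I", len(visible))
    for gid, lod in visible:
        payload += struct.pack("<IB", gid, lod)  # 4-byte id + 1-byte LoD
    return payload

def unpack_lod_result(payload):
    """Client-side inverse of pack_lod_result."""
    (count,) = struct.unpack_from("<I", payload, 0)
    out, offset = [], 4
    for _ in range(count):
        gid, lod = struct.unpack_from("<IB", payload, offset)
        out.append((gid, lod))
        offset += 5
    return out
```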
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative rendering reduces cloud-client data communication
Temporal-aware LoD search optimizes memory access across frames
Stereo rasterization shares computations between eyes for VR
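The stereo-sharing bullet above can be made concrete with a small geometric sketch. This is not the paper's rasterizer: the pinhole model, `focal`, and `ipd` parameters are assumptions for illustration. It shows the underlying geometry that makes sharing possible: for a symmetric stereo pair, both eyes' projections of a point differ only by a horizontal disparity term, so the expensive per-Gaussian work can be done once and each eye adds one cheap offset.

```python
# Illustrative sketch (hypothetical pinhole stereo, not the paper's kernel):
# compute the projection once for a virtual center eye, then derive left and
# right screen x-coordinates from a shared disparity term focal*(ipd/2)/z.

def project_mono(x, y, z, focal):
    """One full perspective projection (the shared, expensive path)."""
    return focal * x / z, focal * y / z

def project_stereo_shared(x, y, z, focal, ipd):
    """Left/right screen coords sharing a single projection. The left eye
    sits at x = -ipd/2, the right at x = +ipd/2, so each eye's screen x is
    the center projection plus/minus focal*(ipd/2)/z."""
    sx, sy = project_mono(x, y, z, focal)    # shared computation
    disparity = focal * (ipd / 2.0) / z      # per-eye extra term
    left = (sx + disparity, sy)
    right = (sx - disparity, sy)
    return left, right
```

Because the offset is exact (not an approximation), this kind of sharing can preserve bit-accurate output relative to projecting each eye independently, which is the property the summary calls bit-precision preservation.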
He Zhu
Shanghai Jiao Tong University

Zheng Liu
Shanghai Jiao Tong University

Xingyang Li
Shanghai Jiao Tong University
Machine Learning Systems

Anbang Wu
Shanghai Jiao Tong University

Jieru Zhao
Associate Professor, Shanghai Jiao Tong University
Hardware-software co-design, AI acceleration and systems, Compilers, FPGA, High-level synthesis

Fangxin Liu
Shanghai Jiao Tong University
In-memory Computing, Brain-inspired Neuromorphic Computing

Yiming Gan
Institute of Computing Technology, Chinese Academy of Sciences

Jingwen Leng
Professor, Shanghai Jiao Tong University
Computer Architecture

Yu Feng
Shanghai Jiao Tong University, Shanghai Qi Zhi Institute