VoluMe – Authentic 3D Video Calls from Live Gaussian Splat Prediction

📅 2025-07-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving photorealistic, low-latency 3D video conferencing using only a monocular 2D webcam. We propose the first end-to-end real-time monocular 3D Gaussian reconstruction framework. Methodologically, we design a conditional per-frame 3D Gaussian prediction network, incorporating a temporal stability loss and differentiable rendering to explicitly enforce dynamic consistency and view fidelity—without relying on specialized hardware, pretrained generative models, or fixed appearance priors. Our core contribution is the formal definition and joint optimization of “authenticity,” a holistic metric balancing geometric plausibility, appearance realism, and temporal coherence. Operating solely on standard RGB input (no depth or multi-view data), our system achieves <30 ms end-to-end latency and attains state-of-the-art visual quality and motion stability. To our knowledge, this is the first method enabling real-time, high-fidelity, view-consistent 3D video conferencing on lightweight consumer devices.
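The "authenticity" notion described above amounts to a per-frame reconstruction constraint: rendering the predicted Gaussians from the captured viewpoint should reproduce the input webcam frame. The summary does not give the actual objective; a minimal sketch, assuming a simple L1 photometric term and hypothetical `rendered`/`captured` image arrays standing in for the differentiable render and the webcam frame:

```python
import numpy as np

def authenticity_loss(rendered: np.ndarray, captured: np.ndarray) -> float:
    """Hypothetical per-frame photometric term: mean absolute error between
    the splat rendering from the captured viewpoint and the input frame.
    (Illustrative only; the paper's exact loss is not stated here.)"""
    return float(np.mean(np.abs(rendered - captured)))

rng = np.random.default_rng(0)
frame = rng.random((4, 4, 3))           # stand-in for an RGB webcam frame
print(authenticity_loss(frame, frame))  # a perfect render scores 0.0
```

Because the representation is conditioned on each frame independently, this term can be evaluated per frame with no enrolment or identity prior.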

📝 Abstract
Virtual 3D meetings offer the potential to enhance copresence, increase engagement and thus improve effectiveness of remote meetings compared to standard 2D video calls. However, representing people in 3D meetings remains a challenge; existing solutions achieve high quality by using complex hardware, making use of fixed appearance via enrolment, or by inverting a pre-trained generative model. These approaches lead to constraints that are unwelcome and ill-fitting for videoconferencing applications. We present the first method to predict 3D Gaussian reconstructions in real time from a single 2D webcam feed, where the 3D representation is not only live and realistic, but also authentic to the input video. By conditioning the 3D representation on each video frame independently, our reconstruction faithfully recreates the input video from the captured viewpoint (a property we call authenticity), while generalizing realistically to novel viewpoints. Additionally, we introduce a stability loss to obtain reconstructions that are temporally stable on video sequences. We show that our method delivers state-of-the-art accuracy in visual quality and stability metrics compared to existing methods, and demonstrate our approach in live one-to-one 3D meetings using only a standard 2D camera and display. This demonstrates that our approach can allow anyone to communicate volumetrically, via a method for 3D videoconferencing that is not only highly accessible, but also realistic and authentic.
Problem

Research questions and friction points this paper is trying to address.

- Real-time 3D Gaussian reconstruction from a 2D webcam feed
- An authentic and realistic 3D representation for videoconferencing
- Achieving high-quality 3D meetings without complex hardware

Innovation

Methods, ideas, or system contributions that make the work stand out.

- Real-time 3D Gaussian prediction from a 2D webcam
- Frame-independent 3D reconstruction for authenticity
- A stability loss for temporal coherence in video
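The stability loss is described only at a high level here. A plausible sketch, assuming it penalizes change in the per-Gaussian parameter vectors (positions, scales, colours, etc.) predicted for consecutive video frames; the paper's actual formulation may differ:

```python
import numpy as np

def stability_loss(params_prev: np.ndarray, params_curr: np.ndarray) -> float:
    """Hypothetical temporal stability term: mean squared change in the
    per-Gaussian parameters between two consecutive frames. A small value
    means the reconstruction is not flickering over time."""
    return float(np.mean((params_curr - params_prev) ** 2))

prev = np.zeros((100, 14))   # e.g. 100 Gaussians x 14 parameters each
curr = prev + 0.01           # a small, temporally stable update
print(stability_loss(prev, curr))
```

In training, such a term would be weighted against the per-frame reconstruction objective, trading a little per-frame fidelity for temporal coherence.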
Authors
Martin de La Gorce
Microsoft, Cambridge, UK
Charlie Hewitt
Google
Tibor Takacs
Microsoft, Cambridge, UK
Robert Gerdisch
Microsoft, Cambridge, UK
Zafiirah Hosenie
Microsoft, Cambridge, UK
Givi Meishvili
Microsoft, Cambridge, UK
Marek Kowalski
Scientist, Microsoft
Thomas J. Cashman
Microsoft, Cambridge, UK
Antonio Criminisi
Partner Research Lead at Microsoft Corporation