🤖 AI Summary
This work addresses the challenge of achieving photorealistic, low-latency 3D video conferencing using only a monocular 2D webcam. We propose the first end-to-end real-time monocular 3D Gaussian reconstruction framework. Methodologically, we design a per-frame 3D Gaussian prediction network conditioned on each video frame, incorporating a temporal stability loss and differentiable rendering to explicitly enforce dynamic consistency and view fidelity, without relying on specialized hardware, pretrained generative models, or fixed appearance priors. A core contribution is the notion of "authenticity": the reconstruction faithfully recreates the input video from the captured viewpoint while generalizing realistically to novel viewpoints. Operating solely on standard RGB input (no depth or multi-view data), our system achieves under 30 ms end-to-end latency and attains state-of-the-art visual quality and motion stability. To our knowledge, this is the first method enabling real-time, high-fidelity, view-consistent 3D video conferencing on lightweight consumer devices.
📝 Abstract
Virtual 3D meetings offer the potential to enhance copresence, increase engagement, and thus improve the effectiveness of remote meetings compared to standard 2D video calls. However, representing people in 3D meetings remains a challenge: existing solutions achieve high quality by using complex hardware, by fixing appearance via enrolment, or by inverting a pre-trained generative model. These approaches impose constraints that are unwelcome and ill-fitting for videoconferencing applications. We present the first method to predict 3D Gaussian reconstructions in real time from a single 2D webcam feed, where the 3D representation is not only live and realistic, but also authentic to the input video. By conditioning the 3D representation on each video frame independently, our reconstruction faithfully recreates the input video from the captured viewpoint (a property we call authenticity), while generalizing realistically to novel viewpoints. Additionally, we introduce a stability loss to obtain reconstructions that are temporally stable on video sequences. We show that our method delivers state-of-the-art accuracy in visual quality and stability metrics compared to existing methods, and we demonstrate our approach in live one-to-one 3D meetings using only a standard 2D camera and display. Our approach thus allows anyone to communicate volumetrically, via a method for 3D videoconferencing that is not only highly accessible, but also realistic and authentic.
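The stability loss is only described at a high level above. As a rough illustration of the idea, one common way to encourage temporally stable per-frame predictions is to penalize frame-to-frame change in the predicted Gaussian parameters (positions, scales, colors, etc.). The sketch below is a hypothetical minimal version; the paper's exact formulation is not given here, and the function name `stability_loss` and the L2 penalty are assumptions.

```python
import numpy as np

def stability_loss(params_prev: np.ndarray, params_curr: np.ndarray,
                   weight: float = 1.0) -> float:
    """Illustrative temporal stability loss (not the paper's exact loss).

    params_prev, params_curr: arrays of per-Gaussian parameters predicted
    for two consecutive video frames, shape (num_gaussians, param_dim).
    Returns a weighted mean squared frame-to-frame difference, so the
    network is discouraged from changing Gaussians that should stay still.
    """
    diff = params_curr - params_prev
    return weight * float(np.mean(diff ** 2))

# Usage: identical predictions across frames incur zero penalty,
# while large per-Gaussian jumps are penalized quadratically.
prev = np.zeros((4, 3))
curr = np.zeros((4, 3))
print(stability_loss(prev, curr))  # identical frames -> 0.0
```

In practice such a term would be combined with a rendering (photometric) loss, since penalizing all change too strongly would suppress genuine motion; the weight trades off stability against responsiveness.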