StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Monocular-to-stereoscopic video generation faces challenges including high computational cost, severe visual artifacts, and inconsistent 3D geometry—particularly problematic for XR applications requiring natural inter-pupillary distance (IPD) alignment and high-fidelity stereo output.
Method: We propose an end-to-end, stereo-supervision-free generative framework built upon a pre-trained video diffusion model. Our approach introduces a novel geometry-aware regularization mechanism and integrates spatio-temporal tiling synthesis to jointly leverage monocular video conditioning and implicit geometric constraints.
Contribution/Results: Leveraging a large-scale, self-collected dataset of 11 million HD stereo video frames, our method achieves high-resolution stereoscopic video generation with strong geometric consistency. Quantitative and qualitative evaluations demonstrate significant improvements over state-of-the-art methods in both perceptual quality and depth consistency, validating its suitability for real-world XR deployment.
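The summary names spatio-temporal tiling synthesis but does not detail it. A common way to realize the spatial half of such a scheme is to run the generator on overlapping tiles and blend the results back with normalized weights; the sketch below is a generic illustration of that idea, with hypothetical tile sizes and a placeholder per-tile function `fn` (temporal chunking would work analogously along the frame axis). It is not the paper's actual implementation.

```python
import numpy as np

def tiled_apply(video, fn, tile=32, overlap=8):
    """Apply `fn` to overlapping spatial tiles of a (T, H, W) video
    and blend the overlapping outputs by weight normalization.
    Illustrative sketch only; tile/overlap values are assumptions."""
    T, H, W = video.shape
    step = tile - overlap
    out = np.zeros_like(video, dtype=np.float64)
    weight = np.zeros((H, W), dtype=np.float64)
    for y in range(0, max(H - overlap, 1), step):
        for x in range(0, max(W - overlap, 1), step):
            y1, x1 = min(y + tile, H), min(x + tile, W)
            # Process one tile (all frames at once) and accumulate.
            out[:, y:y1, x:x1] += fn(video[:, y:y1, x:x1])
            weight[y:y1, x:x1] += 1.0
    # Every pixel is covered by at least one tile, so weight > 0.
    return out / weight[None]
```

With the identity function as `fn`, the blended output reproduces the input exactly, which is a quick sanity check that the overlap accounting is correct.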

📝 Abstract
The growing adoption of XR devices has fueled strong demand for high-quality stereo video, yet its production remains costly and artifact-prone. To address this challenge, we present StereoWorld, an end-to-end framework that repurposes a pretrained video generator for high-fidelity monocular-to-stereo video generation. Our framework jointly conditions the model on the monocular video input while explicitly supervising the generation with a geometry-aware regularization to ensure 3D structural fidelity. A spatio-temporal tiling scheme is further integrated to enable efficient, high-resolution synthesis. To enable large-scale training and evaluation, we curate a high-definition stereo video dataset containing over 11M frames aligned to natural human interpupillary distance (IPD). Extensive experiments demonstrate that StereoWorld substantially outperforms prior methods, generating stereo videos with superior visual fidelity and geometric consistency. The project webpage is available at https://ke-xing.github.io/StereoWorld/.
Problem

Research questions and friction points this paper is trying to address.

Stereo video production for XR remains costly and artifact-prone
Generated views often lack consistent 3D geometry
High-resolution synthesis aligned to natural IPD is computationally expensive
Innovation

Methods, ideas, or system contributions that make the work stand out.

Repurposes pretrained video generator for stereo conversion
Uses geometry-aware regularization for 3D structural fidelity
Integrates spatio-temporal tiling for high-resolution synthesis
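The geometry-aware regularization is likewise not specified on this page. A generic stereo constraint it could resemble is a disparity-warp consistency loss: warp the left view horizontally by a per-pixel disparity and penalize the L1 difference to the right view. Everything below (the nearest-neighbor warp, the sign convention that right-image pixel x corresponds to left-image pixel x + d) is an illustrative assumption, not the paper's actual loss.

```python
import numpy as np

def warp_left_to_right(left, disparity):
    """Nearest-neighbor horizontal warp of a (H, W) left view by a
    per-pixel disparity map. Assumes rectified stereo with positive
    disparity pointing from right-view pixels to left-view pixels."""
    H, W = left.shape
    warped = np.zeros_like(left)
    for y in range(H):
        for x in range(W):
            sx = int(round(x + disparity[y, x]))
            sx = min(max(sx, 0), W - 1)  # clamp at image borders
            warped[y, x] = left[y, sx]
    return warped

def geometry_loss(left, right, disparity):
    """Mean L1 photometric error between the warped left view and
    the right view -- a stand-in for a geometry consistency term."""
    return np.abs(warp_left_to_right(left, disparity) - right).mean()
```

For a pair related by a constant 3-pixel shift and a matching constant disparity map, the loss is zero away from numerical noise, which verifies the warp direction.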
👥 Authors
Ke Xing (Beijing Jiaotong University)
Longfei Li (Beijing Jiaotong University)
Yuyang Yin (Beijing Jiaotong University)
Hanwen Liang (University of Toronto)
Guixun Luo (Beijing Jiaotong University)
Chen Fang (Adobe Research)
Jue Wang (Dzine AI)
Konstantinos N. Plataniotis (University of Toronto)
Xiaojie Jin (Beijing Jiaotong University)
Yao Zhao (Beijing Jiaotong University)
Yunchao Wei (Beijing Jiaotong University)