AI Summary
This work addresses the end-to-end, controllable conversion of monocular video to stereo video, circumventing artifacts from explicit depth estimation and image warping. We propose a conditional latent diffusion framework centered on a guidance-aware VAE decoder: it models disparity consistency directly in the latent space, ensuring geometric fidelity and sharpness. Notably, it enables real-time control of the stereo strength (i.e., the disparity range) via a single scalar parameter at inference time, a first in stereo video generation. Our method bypasses depth prediction and post-hoc warping, directly synthesizing high-fidelity stereo video. Evaluated on three real-world stereo video benchmarks, it significantly outperforms conventional depth-then-warp approaches and state-of-the-art warping-free baselines, achieving new SOTA performance in both visual quality and disparity consistency.
Abstract
The growing demand for immersive 3D content calls for automated monocular-to-stereo video conversion. We present Elastic3D, a controllable, direct end-to-end method for upgrading a conventional video to a binocular one. Our approach, based on conditional latent diffusion, avoids the artifacts caused by explicit depth estimation and warping. The key to its high-quality output is a novel, guided VAE decoder that ensures sharp and epipolar-consistent stereo video. Moreover, our method gives the user control over the strength of the stereo effect (more precisely, the disparity range) at inference time, via an intuitive scalar tuning knob. Experiments on three datasets of real-world stereo videos show that our method outperforms both traditional warping-based and recent warping-free baselines, setting a new standard for reliable, controllable stereo video conversion. Please see the project page for video samples: https://elastic3d.github.io.