🤖 AI Summary
Existing video-to-video (V2V) translation methods support only single-view inputs, making it difficult to ensure cross-view appearance consistency in multi-view synchronized-capture scenarios such as embodied AI. This work proposes the first multimodal, multi-view V2V translation framework, achieving physically and stylistically consistent multi-view video re-simulation. By constructing a shared 4D latent space grounded in the Pi3 spatial foundation model, employing a staged diffusion training strategy, and introducing an autoregressive novel-view synthesis mechanism, the method supports wide baselines, dynamic camera motion, and heterogeneous camera configurations. It matches or exceeds state-of-the-art performance on single-view benchmarks and, for the first time, enables high-quality, cross-view consistent video generation for embodied AI tasks.
📝 Abstract
Recent progress in video-to-video (V2V) translation has enabled realistic re-simulation of embodied AI demonstrations, a capability that allows pretrained robot policies to transfer to new environments without additional data collection. However, prior works operate on only a single view at a time, while embodied AI tasks are commonly captured from multiple synchronized cameras to support policy learning. Naively applying single-view models independently to each camera yields inconsistent appearance across views, and standard transformer architectures do not scale to multi-view settings because cross-view attention incurs quadratic cost in the number of views.
We present VideoWeaver, the first multimodal multi-view V2V translation framework. VideoWeaver is initially trained as a single-view flow-based V2V model. To extend it to the multi-view regime, we ground all views in a shared 4D latent space derived from a feed-forward spatial foundation model, namely Pi3. This encourages view-consistent appearance even under wide baselines and dynamic camera motion. To scale beyond a fixed number of cameras, we train different views at distinct diffusion timesteps, enabling the model to learn both joint and conditional view distributions. This in turn allows autoregressive synthesis of new viewpoints conditioned on existing ones.
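The per-view timestep idea can be sketched as follows. This is a minimal illustration, not the paper's actual training schedule: the function name, the `p_joint` mixing probability, and the uniform timestep sampler are all assumptions. The point is that when views carry different noise levels, nearly clean views can serve as conditioning for noisier ones, which is what enables autoregressive synthesis of new viewpoints later.

```python
import random

def sample_view_timesteps(num_views, num_steps=1000, p_joint=0.5):
    """Hypothetical per-view timestep sampler for multi-view diffusion training.

    With probability p_joint, all views share one timestep, so the model
    learns the joint distribution over views. Otherwise each view gets an
    independent timestep, so views at low noise act as conditioning for
    views at high noise (the conditional distribution).
    """
    if random.random() < p_joint:
        t = random.randrange(num_steps)
        return [t] * num_views
    return [random.randrange(num_steps) for _ in range(num_views)]

# Example: draw timesteps for a 4-camera rig.
timesteps = sample_view_timesteps(4)
```

At inference, the same mechanism would let already-generated views be held at timestep 0 while a new view is denoised from pure noise, conditioned on them.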
Experiments show performance on par with or superior to the state of the art on single-view translation benchmarks and, for the first time, physically and stylistically consistent multi-view translations, including the challenging egocentric and heterogeneous-camera setups central to world randomization for robot learning.