VideoWeaver: Multimodal Multi-View Video-to-Video Transfer for Embodied Agents

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video-to-video (V2V) translation methods support only single-view inputs, making it difficult to ensure cross-view appearance consistency in synchronized multi-view capture scenarios such as those common in embodied intelligence. This work proposes the first multimodal, multi-view V2V translation framework that achieves physically and stylistically consistent multi-view video re-simulation. By constructing a 4D shared latent space grounded in the Pi3 spatial foundation model, employing a staged diffusion training strategy, and introducing an autoregressive novel-view synthesis mechanism, the method supports wide baselines, dynamic camera motion, and heterogeneous camera configurations. It matches or exceeds state-of-the-art performance on single-view benchmarks and, for the first time, enables high-quality, cross-view consistent video generation for embodied intelligence tasks.

📝 Abstract
Recent progress in video-to-video (V2V) translation has enabled realistic re-simulation of embodied AI demonstrations, a capability that allows pretrained robot policies to transfer to new environments without additional data collection. However, prior works can only operate on a single view at a time, while embodied AI tasks are commonly captured from multiple synchronized cameras to support policy learning. Naively applying single-view models independently to each camera leads to inconsistent appearance across views, and standard transformer architectures do not scale to multi-view settings due to the quadratic cost of cross-view attention. We present VideoWeaver, the first multimodal multi-view V2V translation framework. VideoWeaver is initially trained as a single-view flow-based V2V model. To extend it to the multi-view regime, we propose to ground all views in a shared 4D latent space derived from a feed-forward spatial foundation model, namely Pi3. This encourages view-consistent appearance even under wide baselines and dynamic camera motion. To scale beyond a fixed number of cameras, we train views at distinct diffusion timesteps, enabling the model to learn both joint and conditional view distributions. This in turn allows autoregressive synthesis of new viewpoints conditioned on existing ones. Experiments show performance matching or exceeding the state of the art on single-view translation benchmarks and, for the first time, physically and stylistically consistent multi-view translations, including challenging egocentric and heterogeneous-camera setups central to world randomization for robot learning.
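The per-view timestep trick described in the abstract can be sketched roughly as follows. This is a hypothetical illustration of the general idea, not the authors' implementation; the function names and the train-step count are invented. During training, each camera view is assigned an independently sampled diffusion timestep, so the model sees mixtures of clean and noisy views and implicitly learns conditional view distributions. At inference, existing views can then be held clean (timestep 0) while only the new view is denoised, which is what enables autoregressive novel-view synthesis.

```python
import numpy as np

def sample_view_timesteps(num_views, num_train_steps=1000, rng=None):
    """Independently sample a diffusion timestep for each camera view.

    Mixing noise levels across views (instead of sharing one t) exposes
    the model to both the joint distribution (all views noisy) and
    conditionals (some views nearly clean).
    """
    rng = rng or np.random.default_rng()
    return rng.integers(0, num_train_steps, size=num_views)

def conditional_timesteps(num_views, new_view_idx, t):
    """At inference, keep existing views clean (t = 0) and denoise only
    the new view -- the basis for autoregressive novel-view synthesis.
    """
    ts = np.zeros(num_views, dtype=int)
    ts[new_view_idx] = t
    return ts
```

In a real diffusion loop these timestep vectors would condition the denoiser per view; here they only illustrate the sampling pattern.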
Problem

Research questions and friction points this paper is trying to address.

video-to-video translation
multi-view consistency
embodied agents
cross-view attention
world randomization
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-view video-to-video translation
4D latent space
diffusion timestep conditioning
view-consistent synthesis
embodied AI
George Eskandar
University of Stuttgart
Computer Vision, Domain Adaptation, Generative AI, Autonomous Driving, 3D Reconstruction
Fengyi Shen
Huawei Heisenberg Research Center
Mohammad Altillawi
Huawei Heisenberg Research Center
Dong Chen
Ph.D. of Computer Science, University of Rochester; Huawei (present)
program synthesis, program analysis, programming systems, computer architecture
Yang Bai
Huawei Heisenberg Research Center, Ludwig Maximilian University of Munich, Munich Center for Machine Learning (MCML)
Liudi Yang
Huawei Heisenberg Research Center, University of Freiburg
Ziyuan Liu
Unknown affiliation
Robotics, Manipulation and Grasping, Computer Vision, Machine Learning