🤖 AI Summary
Existing world models struggle to achieve high-fidelity, long-horizon future video prediction under joint multimodal observation (e.g., vision, language) and heterogeneous action inputs (e.g., robotic manipulation, camera motion) in general-purpose settings. This paper introduces a general-purpose interactive world model built on an autoregressive diffusion Transformer. Key innovations include time-causal attention, noise-augmented historical memory, action-aware adapters, and a Mixture-of-Action-Experts (MoAE) routing mechanism, which together balance responsiveness and temporal consistency. The model autoregressively synthesizes high-fidelity, long-horizon video futures conditioned on multimodal history and diverse real-time action signals, and achieves significant improvements across benchmarks in prediction fidelity, long-range temporal coherence, and action-dynamics alignment. The authors claim this is the first work to enable general-purpose interactive world modeling for real-world applications such as autonomous driving and dexterous manipulation.
📝 Abstract
Recent advances in diffusion transformers have empowered video generation models to produce high-quality video clips from text or images. However, world models that can predict long-horizon futures from past observations and actions remain underexplored, especially in general-purpose scenarios with varied forms of action. To bridge this gap, we introduce Astra, an interactive general world model that generates real-world futures for diverse scenarios (e.g., autonomous driving, robot grasping) under precise action interactions (e.g., camera motion, robot action). We propose an autoregressive denoising architecture with temporal causal attention to aggregate past observations and support streaming outputs. A noise-augmented history memory reduces over-reliance on past frames, balancing responsiveness with temporal coherence. For precise action control, we introduce an action-aware adapter that injects action signals directly into the denoising process. We further develop a mixture of action experts that dynamically routes heterogeneous action modalities, enhancing versatility across diverse real-world tasks such as exploration, manipulation, and camera control. Astra achieves interactive, consistent, and general long-term video prediction and supports various forms of interaction. Experiments across multiple datasets demonstrate that Astra improves fidelity, long-range prediction, and action alignment over existing state-of-the-art world models.
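To make the two conditioning ideas in the abstract concrete, here is a minimal pure-Python sketch of (a) noise-augmenting cached history latents so the model cannot simply copy past frames, and (b) soft mixture-of-action-experts routing, where a gate scores each expert for the incoming action vector and the output is a probability-weighted combination of expert embeddings. All names (`noise_augment`, `route_action`, `sigma`, the gate/expert shapes) are illustrative assumptions, not the paper's actual implementation, which operates on diffusion-Transformer latents.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of gate scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def noise_augment(frame_latent, sigma=0.1, rng=None):
    """Perturb a cached history latent with Gaussian noise (sigma is
    illustrative) so generation does not over-rely on exact past frames."""
    rng = rng or random.Random(0)  # fixed seed here only for reproducibility
    return [x + sigma * rng.gauss(0.0, 1.0) for x in frame_latent]

def route_action(action_vec, gate_weights, experts):
    """Soft MoAE routing: linear gate -> softmax -> convex combination
    of per-expert embeddings of the same action vector."""
    scores = [sum(w * a for w, a in zip(row, action_vec)) for row in gate_weights]
    probs = softmax(scores)
    dim = len(experts[0](action_vec))
    out = [0.0] * dim
    for p, expert in zip(probs, experts):
        emb = expert(action_vec)
        out = [o + p * e for o, e in zip(out, emb)]
    return out, probs
```

In a real model the "experts" would be small adapter networks specialized per action modality (camera pose, end-effector commands, etc.), and the routed embedding would be injected into the denoising stream by the action-aware adapter.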