MAD: Motion Appearance Decoupling for Efficient Driving World Models

📅 2026-01-14
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
General-purpose video diffusion models struggle to capture the structured motion dynamics and physical consistency required in autonomous driving scenarios, and adapting them to the driving domain incurs prohibitive fine-tuning costs. To address this, the paper proposes a motion-appearance decoupled two-stage paradigm: the model first learns structured motion dynamics through a skeletal proxy, then generates photorealistic RGB videos conditioned on the learned motion. This approach dramatically reduces training overhead, matching or surpassing prior state-of-the-art performance with less than 6% of the computational resources. The proposed MAD-LTX model supports multimodal control, including text prompts, ego-vehicle trajectories, and object-level specifications, enabling efficient and controllable modeling of driving scenes within an open-source framework.
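
A minimal sketch of the two-stage, motion-appearance decoupled pipeline summarized above, assuming a generic diffusion-style interface. The class names, tensor shapes, and conditioning dict are illustrative placeholders, not the paper's actual API; the paper adapts a shared video diffusion backbone (e.g. SVD or LTX) for both stages.

```python
# A minimal two-stage sketch: stage 1 predicts a skeletonized motion video,
# stage 2 renders a photorealistic RGB video conditioned on that motion.
# Class names and tensor shapes are illustrative placeholders only.
import torch


class MotionDiffusion(torch.nn.Module):
    """Stage 1: infer structured motion as a skeleton video from conditioning."""

    def forward(self, cond: dict) -> torch.Tensor:
        # Placeholder for iterative denoising; a real model would sample here.
        # Output: (frames, channels, height, width) skeleton video.
        return torch.zeros(25, 3, 256, 256)


class AppearanceDiffusion(torch.nn.Module):
    """Stage 2: "dress" the motion video with texture and lighting."""

    def forward(self, motion_video: torch.Tensor, cond: dict) -> torch.Tensor:
        # Placeholder: render RGB frames aligned with the given motion.
        return torch.zeros_like(motion_video)


def generate_driving_video(cond: dict) -> torch.Tensor:
    """Reasoning-rendering order: first infer dynamics, then render appearance."""
    motion_model = MotionDiffusion()
    appearance_model = AppearanceDiffusion()
    motion_video = motion_model(cond)                 # structured motion (skeletons)
    rgb_video = appearance_model(motion_video, cond)  # photorealistic video
    return rgb_video
```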

📝 Abstract
Recent video diffusion models generate photorealistic, temporally coherent videos, yet they fall short as reliable world models for autonomous driving, where structured motion and physically consistent interactions are essential. Adapting these generalist video models to driving domains has shown promise but typically requires massive domain-specific data and costly fine-tuning. We propose an efficient adaptation framework that converts generalist video diffusion models into controllable driving world models with minimal supervision. The key idea is to decouple motion learning from appearance synthesis. First, the model is adapted to predict structured motion in a simplified form: videos of skeletonized agents and scene elements, focusing learning on physical and social plausibility. Then, the same backbone is reused to synthesize realistic RGB videos conditioned on these motion sequences, effectively "dressing" the motion with texture and lighting. This two-stage process mirrors a reasoning-rendering paradigm: first infer dynamics, then render appearance. Our experiments show this decoupled approach is exceptionally efficient: adapting SVD, we match prior SOTA models with less than 6% of their compute. Scaling to LTX, our MAD-LTX model outperforms all open-source competitors, and supports a comprehensive suite of text, ego, and object controls. Project page: https://vita-epfl.github.io/MAD-World-Model/
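
As the abstract notes, generation is steered by text, ego, and object controls. Below is an illustrative sketch, not taken from the paper, of how such multimodal controls might be bundled into the conditioning structure consumed by the motion stage; all field and function names are assumptions.

```python
# Hypothetical bundling of the multimodal controls (text prompt, ego
# trajectory, object-level specifications) into one conditioning structure.
from dataclasses import dataclass, field

import torch


@dataclass
class DrivingControls:
    text_prompt: str              # scene description, e.g. weather or time of day
    ego_trajectory: torch.Tensor  # (frames, 2) future ego x/y waypoints
    object_boxes: list[torch.Tensor] = field(default_factory=list)  # per-agent box tracks


def build_conditioning(controls: DrivingControls) -> dict:
    """Flatten the controls into the conditioning dict used in the sketch above."""
    return {
        "text": controls.text_prompt,
        "ego": controls.ego_trajectory,
        "objects": controls.object_boxes,
    }
```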
Problem

Research questions and friction points this paper is trying to address.

world models
autonomous driving
video diffusion models
motion-appearance decoupling
domain adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Motion-Appearance Decoupling
Driving World Models
Video Diffusion Models
Efficient Adaptation
Structured Motion