🤖 AI Summary
This work proposes a unified framework integrating a world model, an action expert, and a force predictor to enhance robots' ability to predict physical outcomes and manipulate adaptively in dynamic environments. The framework employs a multimodal self-attention mechanism to enable deep feature interaction among modules and, for the first time, leverages a Flow Matching Diffusion Transformer for world-model-driven policy learning, supporting both modular and joint training. Furthermore, an online adaptive learning strategy (AdaOL) is introduced to dynamically switch between action generation and future imagination modes, enabling closed-loop real-time adaptation under visual and physical domain shifts. Evaluated across diverse simulated and real-world manipulation tasks, the system significantly outperforms existing methods, demonstrating strong robustness and adaptability, particularly in out-of-distribution dynamic scenarios.
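The fusion step described above, in which tokens from the world model, action expert, and force predictor exchange features through shared self-attention while each module keeps its own stream, can be sketched as follows. This is an illustrative single-head numpy sketch, not the paper's implementation; the token shapes, the absence of learned projections, and the function name `multimodal_self_attention` are all assumptions.

```python
import numpy as np

def multimodal_self_attention(world_tok, action_tok, force_tok):
    """Hedged sketch: fuse tokens from three modules with one shared
    self-attention pass, then split the result back per module.

    Single-head attention without learned Q/K/V projections is an
    illustrative simplification of the mechanism described in the text.
    """
    # Concatenate all modality tokens into one sequence (N, d).
    tokens = np.concatenate([world_tok, action_tok, force_tok], axis=0)
    d = tokens.shape[-1]

    # Scaled dot-product attention across ALL modules jointly.
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    fused = weights @ tokens                        # each token attends to every module

    # Split back so each module retains its own (now cross-informed) stream.
    n_w, n_a = len(world_tok), len(action_tok)
    return fused[:n_w], fused[n_w:n_w + n_a], fused[n_w + n_a:]
```

Because the outputs are split back by token count, each module's downstream layers see the same shapes as before fusion, which is what preserves the modularity the summary mentions.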
📝 Abstract
Effective robotic manipulation requires policies that can anticipate physical outcomes and adapt to real-world environments. In this work, we introduce a unified framework, World-Model-Driven Diffusion Policy with Online Adaptive Learning (AdaWorldPolicy), to enhance robotic manipulation under dynamic conditions with minimal human involvement. Our core insight is that world models provide strong supervision signals that enable online adaptive learning in dynamic environments, and that these signals can be complemented by force-torque feedback to mitigate dynamic force shifts. AdaWorldPolicy integrates a world model, an action expert, and a force predictor, all implemented as interconnected Flow Matching Diffusion Transformers (DiTs). The modules are linked through multimodal self-attention layers, enabling deep feature exchange for joint learning while preserving their distinct modularity. We further propose a novel Online Adaptive Learning (AdaOL) strategy that dynamically switches between an Action Generation mode and a Future Imagination mode to drive reactive updates across all three modules, creating a powerful closed-loop mechanism that adapts to both visual and physical domain shifts with minimal overhead. Across a suite of simulated and real-robot benchmarks, AdaWorldPolicy achieves state-of-the-art performance and adapts dynamically to out-of-distribution scenarios.
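The closed-loop AdaOL behavior described in the abstract, switching between an Action Generation mode and a Future Imagination mode when the world model's predictions diverge from reality, can be sketched as a simple controller. This is a toy illustration under stated assumptions: the paper's actual world model is a Flow Matching DiT, whereas here a linear predictor, an error threshold, and a gradient update stand in for it; the class name `AdaOLController` and all hyperparameters are hypothetical.

```python
import numpy as np

class AdaOLController:
    """Toy sketch of AdaOL mode switching (not the paper's implementation).

    Assumptions: a linear world model o_{t+1} ≈ W @ o_t, a fixed
    prediction-error threshold for detecting domain shift, and a
    one-step gradient update as the 'reactive update'.
    """

    def __init__(self, obs_dim, threshold=0.5, lr=0.1):
        self.W = np.eye(obs_dim)    # stand-in world model
        self.threshold = threshold  # error bound triggering adaptation
        self.lr = lr
        self.mode = "action_generation"

    def step(self, obs, next_obs):
        pred = self.W @ obs                       # imagined next observation
        err = float(np.linalg.norm(pred - next_obs))
        if err > self.threshold:
            # Domain shift detected: enter Future Imagination and adapt
            # the world model online (gradient of 0.5 * ||W o - o'||^2).
            self.mode = "future_imagination"
            self.W -= self.lr * np.outer(pred - next_obs, obs)
        else:
            # Predictions match reality: keep generating actions.
            self.mode = "action_generation"
        return self.mode, err
```

The key design point the sketch captures is that the world model's own prediction error serves as the supervision signal for online adaptation, so no human labels are needed in the loop.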