🤖 AI Summary
Behavior cloning suffers from poor generalization due to the scarcity of expert demonstrations, while existing video prediction models lack control-input awareness and struggle to support precise manipulation tasks. To address these limitations, this paper proposes the Dynamics-Aligned Flow Matching Policy (DAP), a novel framework that establishes a mutual feedback loop between a policy network and a dynamics model. DAP employs a dynamics-aware architecture to jointly optimize action generation and state evolution, and introduces self-correcting dynamic alignment during inference to improve out-of-distribution (OOD) adaptability. Evaluated on real-world robotic manipulation tasks, DAP significantly outperforms mainstream baselines, particularly under OOD conditions such as visual occlusions and lighting variations, demonstrating superior robustness and generalization.
📝 Abstract
Behavior cloning methods for robot learning suffer from poor generalization because their data support does not extend beyond expert demonstrations. Recent approaches leveraging video prediction models have shown promising results by learning rich spatiotemporal representations from large-scale datasets. However, these models learn action-agnostic dynamics that cannot distinguish between different control inputs, limiting their utility for precise manipulation tasks and requiring large pretraining datasets. We propose the Dynamics-Aligned Flow Matching Policy (DAP), which integrates dynamics prediction into policy learning. Our method introduces a novel architecture in which the policy and dynamics models provide mutual corrective feedback during action generation, enabling self-correction and improved generalization. Empirical validation demonstrates generalization performance superior to baseline methods on real-world robotic manipulation tasks, with particular robustness in out-of-distribution (OOD) scenarios including visual distractions and lighting variations.
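To make the mutual-feedback idea concrete, here is a minimal toy sketch of dynamics-aligned action generation. All function names, the linear "dynamics", and the guidance weight are hypothetical stand-ins invented for illustration; the paper's actual networks, losses, and alignment procedure are not specified here. The sketch shows the generic pattern: a flow-matching policy integrates a velocity field from noise toward an action, and at each integration step a dynamics model's consistency gradient nudges the action, providing the self-correcting feedback described above.

```python
import numpy as np

def policy_velocity(action, state, t):
    """Hypothetical stand-in for a learned velocity field v(a, s, t).

    Here it simply flows the action toward a state-dependent target.
    """
    target = 0.5 * state
    return target - action

def dynamics_energy_grad(action, state):
    """Hypothetical stand-in for the gradient of a dynamics-consistency energy.

    Penalizes actions whose predicted next state (under trivial linear
    dynamics s' = s + a) drifts from a goal state; returns the gradient of
    0.5 * ||s + a - goal||^2 with respect to the action.
    """
    predicted_next = state + action
    goal = 1.5 * state
    return predicted_next - goal

def generate_action(state, steps=50, guidance=0.1):
    """Euler-integrate the policy's flow, corrected by the dynamics model."""
    a = np.zeros_like(state)      # start of the flow (noise in practice)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        v = policy_velocity(a, state, t)    # policy proposes a direction
        g = dynamics_energy_grad(a, state)  # dynamics model corrects it
        a = a + dt * (v - guidance * g)     # self-correcting update
    return a

state = np.array([1.0, -2.0])
action = generate_action(state)
```

The key design point this illustrates is that the correction happens *inside* the generation loop at inference time, rather than only through a training loss, which is what allows the policy to adapt when observations drift out of distribution.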