🤖 AI Summary
Current vision-language models (VLMs) excel at semantic and commonsense reasoning but struggle with physical causality and contact dynamics, since their training data contain no dynamic interactions; this limits their applicability to fine-grained robotic manipulation. To address this, we propose a training-free, simulation-augmented reasoning framework: given a single RGB-D frame, it rapidly constructs a physically grounded simulation environment and tightly couples VLM-based language reasoning with physics-based dynamics prediction in a closed-loop receding-horizon optimization, endowing the model with embodied prediction of physical consequences. By unifying semantic understanding with high-fidelity physical simulation, our approach achieves state-of-the-art performance across five real-world manipulation tasks involving both rigid and deformable objects. Notably, it is the first method to enable end-to-end planning over complex physical interactions using off-the-shelf VLMs, without any fine-tuning.
📝 Abstract
Vision-Language Models (VLMs) exhibit remarkable common-sense and semantic reasoning capabilities. However, they lack a grounded understanding of physical dynamics. This limitation arises from training VLMs on static internet-scale visual-language data that contain no causal interactions or action-conditioned changes. Consequently, it remains challenging to leverage VLMs for fine-grained robotic manipulation tasks that require physical understanding, reasoning, and corresponding action planning. To overcome this, we present SIMPACT, a test-time, SIMulation-enabled ACTion Planning framework that equips VLMs with physical reasoning through simulation-in-the-loop world modeling, without requiring any additional training. From a single RGB-D observation, SIMPACT efficiently constructs physics simulations, enabling the VLM to propose informed actions, observe simulated rollouts, and iteratively refine its reasoning. By integrating language reasoning with physics prediction, our simulation-enabled VLM can understand contact dynamics and action outcomes in a physically grounded way. Our method demonstrates state-of-the-art performance on five challenging, real-world rigid-body and deformable manipulation tasks that require fine-grained physical reasoning, outperforming existing general-purpose robotic manipulation models. Our results demonstrate that embedding physics understanding via efficient simulation into VLM reasoning at test time offers a promising path towards generalizable embodied intelligence. The project webpage is available at https://simpact-bot.github.io.
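The propose-simulate-refine loop described in the abstract can be sketched in outline. This is a minimal illustration, not SIMPACT's actual implementation: the functions `propose_actions`, `rollout`, and `score` are hypothetical stand-ins for the VLM proposer, the physics simulator, and the task-progress evaluator, and the one-dimensional state is purely for demonstration.

```python
import random

def propose_actions(observation, history, k=4):
    # Placeholder for the VLM proposing k candidate actions,
    # conditioned on the observation and prior rollout feedback.
    return [("push", random.uniform(-1.0, 1.0)) for _ in range(k)]

def rollout(sim_state, action, horizon=10):
    # Placeholder for the physics simulator predicting the
    # outcome of an action over a short horizon.
    _, magnitude = action
    return sim_state + magnitude * horizon * 0.01

def score(predicted_state, goal_state):
    # Task progress: negative distance to the goal configuration.
    return -abs(goal_state - predicted_state)

def plan(observation, sim_state, goal_state, iterations=3):
    """Closed-loop receding-horizon planning: propose, simulate, refine."""
    history = []
    best_action, best_score = None, float("-inf")
    for _ in range(iterations):
        for action in propose_actions(observation, history):
            s = score(rollout(sim_state, action), goal_state)
            history.append((action, s))  # feedback for the next round
            if s > best_score:
                best_action, best_score = action, s
    return best_action

best = plan(observation=None, sim_state=0.0, goal_state=0.05)
print(best)
```

At each iteration the proposer sees the scored history of earlier rollouts, which is the mechanism by which simulated physical consequences feed back into the VLM's reasoning.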