🤖 AI Summary
To enable embodied agents to perform multi-step, multimodal tasks in dynamic physical environments, this paper introduces CLIER, the first closed-loop embodied reasoning system. CLIER employs a modular neuro-symbolic framework that jointly supports natural language instruction understanding, visual perception, estimation of non-visual properties (e.g., object weight), online belief updating, and action re-planning. The authors develop a MuJoCo-Blender co-simulation environment that combines high-fidelity physics interaction with photorealistic rendering, and establish the first benchmark covering ten categories of multi-step embodied reasoning tasks. Experiments show that CLIER achieves task success rates above 76% in simulation and 64% on a real robotic arm, with significantly improved robustness to environmental disturbances, sensor noise, and actuation uncertainty, as well as stronger cross-task generalization.
📝 Abstract
Embodied reasoning systems integrate robotic hardware and cognitive processes to perform complex tasks, typically in response to a natural language query about a specific physical environment. This usually involves updating the system's belief about the scene or physically interacting with and changing the scene (e.g., 'Sort the objects from lightest to heaviest'). To facilitate the development of such systems, we introduce a new simulation environment that combines the MuJoCo physics engine with the high-quality Blender renderer to provide realistic visual observations that are also consistent with the physical state of the scene. Together with the simulator, we propose a new benchmark composed of 10 classes of multi-step reasoning scenarios that require simultaneous visual and physical measurements. Finally, we develop a new modular Closed Loop Interactive Reasoning (CLIER) approach that accounts for measurements of non-visual object properties, changes in the scene caused by external disturbances, and uncertain outcomes of robotic actions. We extensively evaluate our approach in simulation and in real-world manipulation tasks, achieving success rates above 76% and 64%, respectively.
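The closed-loop idea from the abstract, namely acting, measuring a non-visual property, updating the belief, and re-planning, can be illustrated with a toy weight-sorting example. This is a minimal sketch, not the paper's actual method: all function and variable names (`weigh`, `closed_loop_sort`, `scene`) are hypothetical, and sensor noise and action uncertainty are elided.

```python
def weigh(env, obj):
    """Simulated robot action: measure one object's weight.

    A real system would return a noisy reading and could fail;
    both are elided in this sketch."""
    return env[obj]

def closed_loop_sort(env):
    belief = {}            # belief over non-visual properties (weights)
    pending = list(env)    # objects not yet measured
    plan = []
    while pending:
        obj = pending.pop(0)                    # act: pick next object to weigh
        belief[obj] = weigh(env, obj)           # observe: update belief
        plan = sorted(belief, key=belief.get)   # re-plan from current belief
    return plan

# Hypothetical scene: object name -> true weight in kg.
scene = {"book": 0.5, "cup": 0.2, "pan": 1.1}
print(closed_loop_sort(scene))  # ['cup', 'book', 'pan']
```

The point of the loop is that the plan is recomputed after every measurement, so a disturbance or an unexpected reading mid-task would be absorbed on the next iteration rather than invalidating a fixed upfront plan.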