🤖 AI Summary
Existing embodied reasoning approaches rely on fixed templates, which often introduce irrelevant information and hinder action-prediction performance. This work proposes the first self-supervised embodied reasoning framework that operates without handcrafted templates or annotations, formulating reasoning as a latent variable within importance-weighted variational inference. The framework automatically distills task-specific reasoning strategies from internet-scale knowledge and grounds that knowledge in physical execution through multimodal Vision-Language-Action (VLA) architectures ranging from 1B to 30B parameters. By distilling reasoning that better guides action prediction, the method achieves substantial improvements across diverse embodied platforms—including robotic arms, bipedal and quadrupedal robots, and autonomous driving—yielding a 28% increase in task success rate, a 101% improvement in navigation scores, and a 21% reduction in collision rate, markedly outperforming baseline models.
📝 Abstract
Embodied Chain-of-Thought (CoT) reasoning has significantly enhanced Vision-Language-Action (VLA) models, yet current methods rely on rigid templates to specify reasoning primitives (e.g., objects in the scene, high-level plans, structural affordances). These templates can force policies to process irrelevant information that distracts from critical action-prediction signals. This creates a bottleneck: without successful policies, we cannot verify reasoning quality; without quality reasoning, we cannot build robust policies. We introduce R&B-EnCoRe, which enables models to bootstrap embodied reasoning from internet-scale knowledge through self-supervised refinement. By treating reasoning as a latent variable within importance-weighted variational inference, models can generate and distill a refined reasoning training dataset of embodiment-specific strategies without external rewards, verifiers, or human annotation. We validate R&B-EnCoRe across manipulation (Franka Panda in simulation, WidowX in hardware), navigation (bipedal, wheeled, bicycle, and quadruped platforms), and autonomous driving embodiments using various VLA architectures with 1B, 4B, 7B, and 30B parameters. Our approach achieves 28% gains in manipulation success, a 101% improvement in navigation scores, and a 21% reduction in collision rate over models that indiscriminately reason about all available primitives. R&B-EnCoRe enables models to distill reasoning that is predictive of successful control, bypassing manual annotation engineering while grounding internet-scale knowledge in physical execution.
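For readers unfamiliar with the objective class mentioned above, the standard importance-weighted variational bound can be sketched as follows. Treating the reasoning trace $c$ as a latent variable conditioning the action $a$ on the observation $o$ (this notation is assumed for illustration, not taken from the paper), the $K$-sample bound on the action log-likelihood reads:

```latex
\log p_\theta(a \mid o)
\;\ge\;
\mathbb{E}_{c_{1:K} \sim q_\phi(c \mid o, a)}
\left[
  \log \frac{1}{K} \sum_{k=1}^{K}
  \frac{p_\theta(a, c_k \mid o)}{q_\phi(c_k \mid o, a)}
\right]
```

Here $q_\phi$ proposes candidate reasoning traces and $p_\theta$ scores how well each trace explains the action; the bound tightens as $K$ grows, so the importance weights naturally favor traces that are predictive of successful control. This is the generic importance-weighted bound; the paper's exact objective may differ in its parameterization.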