🤖 AI Summary
Existing world models struggle to map visual or linguistic predictions onto robot actions and are prone to error accumulation in long-horizon tasks, while traditional symbolic planners typically operate in isolation from visual perception. To address these limitations, this work proposes a hierarchical world model (H-WM) that jointly learns an executable symbolic logic world model and a visual world model within a unified two-level architecture. The high-level component predicts symbolic state transitions, while the low-level component models the evolution of image observations; both are co-trained on aligned action–symbol–vision data. This design keeps symbolic reasoning and embodied perception synchronized, substantially mitigating error propagation in long-horizon planning. Experiments demonstrate that integrating H-WM into a vision-language-action (VLA) policy significantly improves success rates and robustness on complex tasks.
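The two-level rollout described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual implementation: all class and function names (`SymbolicWorldModel`, `VisualWorldModel`, `rollout`, the STRIPS-style `ACTION_EFFECTS` table) are hypothetical, and the "visual" predictor is a stub standing in for a learned video model. It shows only the control flow: the high level predicts the next logical state, and the low level is conditioned on that prediction so the two stay synchronized.

```python
# Hypothetical sketch of a hierarchical (symbolic + visual) world model
# rollout. Names and the STRIPS-style effect table are illustrative
# assumptions, not the paper's API.
from dataclasses import dataclass


@dataclass(frozen=True)
class SymbolicState:
    predicates: frozenset  # e.g. {"holding(block)", "on(block, table)"}


# STRIPS-style (add-list, delete-list) effects for each action.
ACTION_EFFECTS = {
    "pick(block)": (frozenset({"holding(block)"}),
                    frozenset({"on(block, table)"})),
    "place(block)": (frozenset({"on(block, table)"}),
                     frozenset({"holding(block)"})),
}


class SymbolicWorldModel:
    """High level: predicts logical state transitions for an action."""

    def predict(self, state: SymbolicState, action: str) -> SymbolicState:
        add, delete = ACTION_EFFECTS[action]
        return SymbolicState((state.predicates - delete) | add)


class VisualWorldModel:
    """Low level: predicts the next observation, conditioned on the
    predicted symbolic state so both levels stay in sync. A real model
    would be a learned image predictor; this stub just tags the frame."""

    def predict(self, observation: str, action: str,
                symbolic_state: SymbolicState) -> str:
        return f"{observation}->{action}"


def rollout(sym_wm, vis_wm, sym_state, obs, plan):
    """Jointly roll out symbolic and visual predictions over a plan."""
    trajectory = []
    for action in plan:
        sym_state = sym_wm.predict(sym_state, action)  # logic first
        obs = vis_wm.predict(obs, action, sym_state)   # vision follows
        trajectory.append((action, sym_state, obs))
    return trajectory


s0 = SymbolicState(frozenset({"on(block, table)"}))
traj = rollout(SymbolicWorldModel(), VisualWorldModel(),
               s0, "img0", ["pick(block)", "place(block)"])
print(traj[-1][1].predicates)  # → frozenset({'on(block, table)'})
```

The key design choice this sketch mirrors is the ordering inside the loop: the symbolic prediction is produced first and passed to the visual predictor, so the perceptual rollout is always anchored to a robot-executable logical state rather than drifting on its own over long horizons.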
📝 Abstract
World models are becoming central to robotic planning and control, as they enable prediction of future state transitions. Existing approaches often emphasize video generation or natural language prediction, which are difficult to ground directly in robot actions and suffer from compounding errors over long horizons. Traditional task and motion planning relies on symbolic logic world models, such as planning domains, that are robot-executable and robust for long-horizon reasoning. However, these methods typically operate independently of visual perception, preventing synchronized symbolic and perceptual state prediction. We propose a Hierarchical World Model (H-WM) that jointly predicts logical and visual state transitions within a unified bilevel framework. H-WM combines a high-level logical world model with a low-level visual world model, integrating the robot-executable, long-horizon robustness of symbolic reasoning with perceptual grounding from visual observations. The hierarchical outputs provide stable and consistent intermediate guidance for long-horizon tasks, mitigating error accumulation and enabling robust execution across extended task sequences. To train H-WM, we introduce a robotic dataset that aligns robot motion with symbolic states, actions, and visual observations. Experiments across vision-language-action (VLA) control policies demonstrate the effectiveness and generality of the approach.