H-WM: Robotic Task and Motion Planning Guided by Hierarchical World Model

📅 2026-02-11
🤖 AI Summary
Existing world models struggle to effectively map visual or linguistic predictions to robot actions and are prone to error accumulation in long-horizon tasks, while traditional symbolic planners often operate disconnected from visual perception. To address these limitations, this work proposes a hierarchical world model (H-WM) that jointly learns an executable symbolic logic world model and a visual world model within a unified two-level architecture. The high-level component predicts symbolic state transitions, while the low-level component models the evolution of image observations; both are co-trained using aligned action–symbol–vision data. This design enables synchronized symbolic reasoning and embodied perception, substantially mitigating error propagation in long-horizon planning. Experiments demonstrate that integrating H-WM into a vision–language–action (VLA) policy significantly improves success rates and robustness on complex tasks.

📝 Abstract
World models are becoming central to robotic planning and control, as they enable prediction of future state transitions. Existing approaches often emphasize video generation or natural language prediction, which are difficult to directly ground in robot actions and suffer from compounding errors over long horizons. Traditional task and motion planning relies on symbolic logic world models, such as planning domains, that are robot-executable and robust for long-horizon reasoning. However, these methods typically operate independently of visual perception, preventing synchronized symbolic and perceptual state prediction. We propose a Hierarchical World Model (H-WM) that jointly predicts logical and visual state transitions within a unified bilevel framework. H-WM combines a high-level logical world model with a low-level visual world model, integrating the robot-executable, long-horizon robustness of symbolic reasoning with perceptual grounding from visual observations. The hierarchical outputs provide stable and consistent intermediate guidance for long-horizon tasks, mitigating error accumulation and enabling robust execution across extended task sequences. To train H-WM, we introduce a robotic dataset that aligns robot motion with symbolic states, actions, and visual observations. Experiments across vision-language-action (VLA) control policies demonstrate the effectiveness and generality of the approach.
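The abstract's bilevel design (a high-level logical world model driving a low-level visual world model, rolled out jointly over a plan) can be illustrated with a minimal sketch. This is not the paper's actual implementation or API: the class names, the toy pick-and-place symbolic domain, and the stand-in visual predictor are all illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of a hierarchical world model rollout.
# All names and the toy domain below are illustrative, not from the paper.

@dataclass(frozen=True)
class SymbolicState:
    predicates: frozenset  # e.g. {("on", "cube", "table")}

class SymbolicWorldModel:
    """High level: predicts logical state transitions, STRIPS-style."""
    def predict(self, state, action):
        name, obj = action
        preds = set(state.predicates)
        if name == "pick" and ("on", obj, "table") in preds:
            preds.discard(("on", obj, "table"))
            preds.add(("holding", obj))
        elif name == "place" and ("holding", obj) in preds:
            preds.discard(("holding", obj))
            preds.add(("on", obj, "table"))
        return SymbolicState(frozenset(preds))

class VisualWorldModel:
    """Low level: predicts the next observation. In the paper this would be
    a learned image predictor; here a stand-in that advances a frame index."""
    def predict(self, observation, action, symbolic_state):
        # Conditioning on the high-level symbolic prediction is what keeps
        # the two levels synchronized in the hierarchical design.
        return {"frame": observation["frame"] + 1,
                "symbols": symbolic_state.predicates}

class HierarchicalWorldModel:
    def __init__(self, high, low):
        self.high, self.low = high, low

    def rollout(self, sym_state, obs, actions):
        """Jointly roll out symbolic and visual predictions over a plan."""
        trajectory = []
        for action in actions:
            sym_state = self.high.predict(sym_state, action)  # logic first
            obs = self.low.predict(obs, action, sym_state)    # then vision
            trajectory.append((sym_state, obs))
        return trajectory

# Example: two-step plan, symbolic and visual states stay in lockstep.
hwm = HierarchicalWorldModel(SymbolicWorldModel(), VisualWorldModel())
start = SymbolicState(frozenset({("on", "cube", "table")}))
traj = hwm.rollout(start, {"frame": 0}, [("pick", "cube"), ("place", "cube")])
```

The point of the sketch is the ordering inside `rollout`: the symbolic prediction is computed first and then conditions the visual prediction, which is how the abstract's "synchronized symbolic and perceptual state prediction" would look as a control flow.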
Problem

Research questions and friction points this paper is trying to address.

world model
task and motion planning
symbolic reasoning
visual perception
long-horizon planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical World Model
Task and Motion Planning
Symbolic-Perceptual Integration
Long-horizon Robotic Planning
Visual Grounding
Wenyuan Chen
University of Toronto
Computer vision, Robotics, Medical Imaging, Deep learning

Jinbang Huang
Huawei Noah’s Ark Lab, University of Toronto

Oscar Pang
Huawei Noah’s Ark Lab, University of Toronto

Zhiyuan Li
Huawei Noah’s Ark Lab, University of Toronto

Xiao Hu
Huawei Noah’s Ark Lab

Lingfeng Zhang
PhD student at Tsinghua University
Embodied AI

Zhanguang Zhang
Huawei Noah’s Ark Lab

Mark Coates
Professor of Electrical Engineering, McGill University
Signal Processing, Computer Networks

Tongtong Cao
Researcher, Huawei Noah's Ark Lab
Robotics, Embodied AI, Autonomous driving

Xingyue Quan
Huawei Noah’s Ark Lab

Yingxue Zhang
Huawei
Graph representation learning, Graph Reasoning, LLMs Reasoning, Knowledge Graphs, Recommender Systems