🤖 AI Summary
Existing robotic manipulation methods decouple observation prediction from action generation and neglect task objectives, leading to semantic inconsistency and behavioral discontinuity. This paper proposes a goal-driven hierarchical interaction framework that jointly optimizes actions and observations through a three-stage synergistic mechanism: (1) coarse-grained goal anchoring, (2) fine-grained observation synthesis, and (3) interaction-aware action refinement—ensuring semantic alignment and temporal coherence. The paper introduces, for the first time, a goal anchoring mechanism that integrates multimodal inputs with historical action memory to condition both the observation synthesis module and the interaction-aware action refinement module. Evaluated on both simulation and real-robot platforms, the approach achieves state-of-the-art performance, improving task success rate by 12.7% over the strongest baseline and significantly enhancing prediction consistency.
📝 Abstract
Unified video and action prediction models hold great potential for robotic manipulation, as future observations offer contextual cues for planning, while actions reveal how interactions shape the environment. However, most existing approaches treat observation and action generation in a monolithic and goal-agnostic manner, often leading to semantically misaligned predictions and incoherent behaviors. To this end, we propose H-GAR, a Hierarchical interaction framework via Goal-driven observation-Action Refinement. To anchor prediction to the task objective, H-GAR first produces a goal observation and a coarse action sketch that outline a high-level route toward the goal. To enable explicit interaction between observation and action under the guidance of the goal observation for more coherent decision-making, we devise two synergistic modules. (1) The Goal-Conditioned Observation Synthesizer (GOS) synthesizes intermediate observations based on the coarse-grained actions and the predicted goal observation. (2) The Interaction-Aware Action Refiner (IAAR) refines coarse actions into fine-grained, goal-consistent actions by leveraging feedback from the intermediate observations and a Historical Action Memory Bank that encodes prior actions to ensure temporal consistency. By integrating goal grounding with explicit action-observation interaction in a coarse-to-fine manner, H-GAR enables more accurate manipulation. Extensive experiments on both simulation and real-world robotic manipulation tasks demonstrate that H-GAR achieves state-of-the-art performance.
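The coarse-to-fine control flow described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the pipeline's structure only: the module names (GOS, IAAR, Historical Action Memory Bank) come from the abstract, but all function signatures, placeholder string representations, and the memory-bank interface are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of H-GAR's coarse-to-fine loop, inferred from the
# abstract. Real modules would be learned networks over images and action
# tensors; strings stand in for observations and actions here.
from collections import deque


class HistoricalActionMemoryBank:
    """Stores recently refined actions to encourage temporal consistency."""

    def __init__(self, capacity=8):
        self.buffer = deque(maxlen=capacity)

    def push(self, action):
        self.buffer.append(action)

    def context(self):
        return list(self.buffer)


def predict_goal(observation, instruction):
    # Stage 1 (illustrative): predict a goal observation and a coarse
    # action sketch outlining a high-level route toward the goal.
    goal_obs = f"goal({instruction})"
    coarse_actions = [f"coarse_{i}" for i in range(3)]
    return goal_obs, coarse_actions


def synthesize_observation(goal_obs, coarse_action):
    # GOS (illustrative): intermediate observation conditioned on the
    # coarse action and the predicted goal observation.
    return f"obs[{coarse_action}|{goal_obs}]"


def refine_action(coarse_action, intermediate_obs, memory):
    # IAAR (illustrative): refine the coarse action using feedback from
    # the intermediate observation and the action-memory context.
    return f"fine_{coarse_action}|{len(memory.context())}"


def hgar_step(observation, instruction):
    """One coarse-to-fine pass: goal anchoring -> GOS -> IAAR."""
    memory = HistoricalActionMemoryBank()
    goal_obs, coarse_actions = predict_goal(observation, instruction)
    refined = []
    for action in coarse_actions:
        inter_obs = synthesize_observation(goal_obs, action)
        fine = refine_action(action, inter_obs, memory)
        memory.push(fine)  # prior refinements condition later steps
        refined.append(fine)
    return refined
```

The key structural point the sketch captures is the feedback direction: observations are synthesized under goal conditioning, and action refinement consumes both those observations and the running action history, rather than each stream being predicted in isolation.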