Non-Markovian Long-Horizon Robot Manipulation via Keyframe Chaining

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that existing vision–language–action (VLA) models struggle with long-horizon robotic manipulation tasks exhibiting non-Markovian dependencies, as they rely solely on current observations and neglect crucial historical states. To overcome this limitation, we propose a keyframe-chaining framework that, for the first time, explicitly models long-term non-Markovian dependencies within a VLA model. Our approach employs a learnable keyframe selector to automatically identify salient state transitions and introduces a progress-aware query mechanism that dynamically retrieves history-relevant keyframes aligned with the current execution phase. These retrieved keyframes are integrated into the policy network as interleaved visual tokens. Evaluated on four newly designed non-Markovian tasks in the ManiSkill simulation environment, our method significantly outperforms baseline approaches, substantially improving success rates in long-horizon manipulation.

📝 Abstract
Existing Vision-Language-Action (VLA) models often struggle to generalize to long-horizon tasks due to their heavy reliance on immediate observations. While recent studies incorporate retrieval mechanisms or extend context windows to handle procedural tasks, they still fail to capture Non-Markovian dependencies, where optimal actions depend on specific past states rather than on the current observation alone. To address this, we introduce Keyframe-Chaining VLA, a framework that extracts and links key historical frames to model long-horizon dependencies. Specifically, we propose an automatic keyframe selector that learns a discriminative embedding space, effectively identifying distinct state transitions. To capture task-critical information, we design a progress-aware query mechanism that dynamically retrieves historical frames based on their temporal relevance to the current execution phase. These selected keyframes are integrated into the VLA as interleaved visual tokens, explicitly grounding the policy in long-horizon temporal context. Finally, we introduce a suite of four Non-Markovian manipulation tasks built on the ManiSkill simulator to measure task success rates. Experimental results demonstrate that our method achieves superior performance, effectively tackling robot manipulation tasks characterized by long-horizon temporal dependencies. Code is available at https://github.com/cytoplastm/KC-VLA.
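The abstract's two mechanisms can be sketched in a few lines. The sketch below is an illustration, not the paper's implementation: `select_keyframes`, `progress_aware_retrieve`, and the `threshold`/`tau` parameters are hypothetical names. The learnable discriminative selector is replaced with a simple cosine-distance change detector over (toy) frame embeddings, and progress-aware retrieval is approximated by content similarity down-weighted by the gap between each keyframe's normalized timestamp and the current execution progress.

```python
import numpy as np


def select_keyframes(frame_feats, threshold=0.5):
    """Flag frames whose embedding shifts sharply from the previous frame.

    The paper learns a discriminative embedding space for this; here,
    cosine distance between consecutive frame features is used as a proxy.
    """
    normed = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    # cosine distance between consecutive frames; frame 0 is always kept
    dist = 1.0 - np.sum(normed[1:] * normed[:-1], axis=1)
    keep = np.concatenate([[True], dist > threshold])
    return np.flatnonzero(keep)


def progress_aware_retrieve(query, keyframe_feats, keyframe_steps,
                            progress, k=2, tau=0.3):
    """Score keyframes by content similarity, down-weighted by how far each
    keyframe's normalized timestamp lies from the current progress."""
    sim = keyframe_feats @ query                          # content relevance
    temporal = np.exp(-np.abs(keyframe_steps - progress) / tau)  # phase match
    scores = sim * temporal
    top = np.argsort(scores)[::-1][:k]
    return np.sort(top)  # chronological order, ready for token interleaving


# toy trajectory: three visually distinct phases of a manipulation task
frame_feats = np.array([[1.0, 0.0]] * 4 + [[0.0, 1.0]] * 3 + [[-1.0, 0.0]] * 3)
keyframes = select_keyframes(frame_feats)      # → [0, 4, 7]
steps = keyframes / (len(frame_feats) - 1)     # normalized timestamps
retrieved = progress_aware_retrieve(
    query=np.array([0.0, 1.0]),                # query matching the middle phase
    keyframe_feats=frame_feats[keyframes],
    keyframe_steps=steps,
    progress=0.5,
    k=2,
)
```

The retrieved indices point into the keyframe list; in the actual framework, the corresponding frames would be encoded and interleaved with the current observation's visual tokens before being fed to the policy.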
Problem

Research questions and friction points this paper is trying to address.

Non-Markovian
long-horizon
robot manipulation
VLA
temporal dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Keyframe Chaining
Non-Markovian Dependencies
Vision-Language-Action Models
Long-Horizon Manipulation
Progress-Aware Retrieval