BPP: Long-Context Robot Imitation Learning by Focusing on Key History Frames

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a central challenge in robotic imitation learning from long-horizon demonstrations: training data rarely cover the exponentially growing space of possible histories and are susceptible to spurious correlations, leading to poor generalization at deployment. To mitigate this, the authors propose Big Picture Policies (BPP), which leverage vision-language models to automatically identify task-critical frames and construct a compact, semantically meaningful history representation. By replacing the full observation history with a minimal set of keyframes, BPP directly alleviates the coverage bottleneck inherent in long-history settings. The approach is evaluated across four real-world manipulation tasks and three simulated environments, achieving a 70% relative improvement in success rate over the strongest baseline in physical-world experiments.
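The core idea can be sketched in a few lines: instead of feeding the policy the full observation history, keep only frames that a vision-language model labels as task-critical events. The snippet below is a minimal illustration, not the paper's implementation; the `Frame` class, the event vocabulary, and the precomputed `vlm_label` field (standing in for a real VLM call) are all assumptions made for the sketch.

```python
from dataclasses import dataclass

# Hypothetical event vocabulary; the paper's actual prompts and
# event definitions are not specified in this summary.
KEY_EVENTS = {"opened_drawer", "picked_object", "placed_object"}

@dataclass
class Frame:
    image_id: int   # stand-in for pixel data
    vlm_label: str  # event label a VLM would assign to this frame

def select_keyframes(history, max_keyframes=4):
    """Project a long observation history onto a compact set of
    task-relevant keyframes (latest occurrence of each key event)."""
    latest = {}
    for f in history:
        if f.vlm_label in KEY_EVENTS:
            latest[f.vlm_label] = f  # keep most recent per event type
    selected = sorted(latest.values(), key=lambda f: f.image_id)
    return selected[-max_keyframes:]

def policy_input(history, current_obs):
    """The policy conditions on keyframes plus the current
    observation, never on the raw full history."""
    return select_keyframes(history) + [current_obs]
```

Because every rollout, however long or varied, is projected onto the same small set of event slots, training and deployment inputs occupy a far smaller shared space, which is the distribution-shift reduction the summary describes.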

📝 Abstract
Many robot tasks require attending to the history of past observations. For example, finding an item in a room requires remembering which places have already been searched. However, the best-performing robot policies typically condition only on the current observation, limiting their applicability to such tasks. Naively conditioning on past observations often fails due to spurious correlations: policies latch onto incidental features of training histories that do not generalize to out-of-distribution trajectories upon deployment. We analyze why policies latch onto these spurious correlations and find that this problem stems from limited coverage over the space of possible histories during training, which grows exponentially with horizon. Existing regularization techniques provide inconsistent benefits across tasks, as they do not fundamentally address this coverage problem. Motivated by these findings, we propose Big Picture Policies (BPP), an approach that conditions on a minimal set of meaningful keyframes detected by a vision-language model. By projecting diverse rollouts onto a compact set of task-relevant events, BPP substantially reduces distribution shift between training and deployment, without sacrificing expressivity. We evaluate BPP on four challenging real-world manipulation tasks and three simulation tasks, all requiring history conditioning. BPP achieves 70% higher success rates than the best comparison on real-world evaluations.
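The abstract's coverage argument is easy to make concrete: if each timestep can take one of $k$ distinct observations, the number of possible length-$H$ histories is $k^H$, so a fixed dataset covers a vanishing fraction as the horizon grows. The numbers below are purely illustrative assumptions, not figures from the paper.

```python
# Illustrative coverage calculation: distinct histories grow as k**H,
# so dataset coverage collapses exponentially with horizon.
obs_per_step = 10       # assumed distinct observations per timestep
dataset_size = 100_000  # assumed number of training trajectories

def max_coverage(horizon):
    """Upper bound on the fraction of possible histories the
    dataset can contain at a given horizon."""
    return min(1.0, dataset_size / obs_per_step ** horizon)

for horizon in (1, 3, 5, 10):
    print(f"H={horizon:2d}: coverage <= {max_coverage(horizon):.6f}")
```

Keyframe selection sidesteps this by making the effective "horizon" the small, fixed number of key events rather than the raw trajectory length.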
Problem

Research questions and friction points this paper is trying to address.

long-context
robot imitation learning
history conditioning
spurious correlations
distribution shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

long-context imitation learning
keyframe selection
vision-language model
distribution shift reduction
robot manipulation