ATK: Automatic Task-driven Keypoint Selection for Robust Policy Learning

📅 2025-06-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Visual policies often suffer performance degradation when deployed across diverse environments due to visual domain shifts. To address this, we propose a task-driven, end-to-end learnable framework for automatic selection of minimal 2D keypoints—replacing hand-crafted features with a lightweight, robust state representation. Our method requires no object priors or manual annotations, jointly optimizing for task discriminability, behavioral predictability, and interpretability. It integrates a frozen pre-trained vision encoder, expert demonstration distillation, differentiable keypoint tracking, and a behavior-consistency-based selection mechanism. Evaluated on multiple robotic manipulation tasks, our approach reduces the number of required keypoints by over 60%, improves sim-to-real success rates by 32%, and demonstrates significantly enhanced robustness—outperforming state-of-the-art methods—on challenging scenarios involving transparent objects, deformable entities, and fine-grained manipulation.

Technology Category

Application Category

📝 Abstract
Visuomotor policies often suffer from perceptual challenges, where visual differences between training and evaluation environments degrade policy performance. Policies relying on state estimations, like 6D pose, require task-specific tracking and are difficult to scale, while raw sensor-based policies may lack robustness to small visual disturbances.In this work, we leverage 2D keypoints - spatially consistent features in the image frame - as a flexible state representation for robust policy learning and apply it to both sim-to-real transfer and real-world imitation learning. However, the choice of which keypoints to use can vary across objects and tasks. We propose a novel method, ATK, to automatically select keypoints in a task-driven manner so that the chosen keypoints are predictive of optimal behavior for the given task. Our proposal optimizes for a minimal set of keypoints that focus on task-relevant parts while preserving policy performance and robustness. We distill expert data (either from an expert policy in simulation or a human expert) into a policy that operates on RGB images while tracking the selected keypoints. By leveraging pre-trained visual modules, our system effectively encodes states and transfers policies to the real-world evaluation scenario despite wide scene variations and perceptual challenges such as transparent objects, fine-grained tasks, and deformable objects manipulation. We validate ATK on various robotic tasks, demonstrating that these minimal keypoint representations significantly improve robustness to visual disturbances and environmental variations. See all experiments and more details on our website.
Problem

Research questions and friction points this paper is trying to address.

Selecting task-specific keypoints for robust policy learning
Overcoming visual disturbances in sim-to-real transfer
Improving policy robustness with minimal keypoint sets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically selects task-driven 2D keypoints
Uses minimal keypoints for robust policy learning
Leverages pre-trained visual modules for transfer
🔎 Similar Papers
No similar papers found.