🤖 AI Summary
Existing vision-based robotic learning relies on high-dimensional, computationally expensive inputs (e.g., raw images or point clouds) or hand-crafted, task-specific keypoints, resulting in strong background interference, poor generalization, and weak semantic understanding.
Method: We propose the first task-agnostic, embodiment-agnostic, and prior-free lightweight semantic keypoint-driven framework: (1) operability-aware region filtering to eliminate redundancy; (2) category-level feature distillation for semantically consistent keypoint representation; and (3) a gated Transformer architecture for sparse keypoint-guided policy learning. Our method autonomously localizes semantic keypoints from a single image and text prompt using only a 38-dimensional state space.
Contribution/Results: Trained for just 15 minutes, our approach achieves 82% success rate on unseen objects, categories, backgrounds, and distractor scenes—demonstrating substantial improvements in data efficiency, real-time inference, and cross-task generalization.
📝 Abstract
Vision-based robot learning often relies on dense image or point-cloud inputs, which are computationally heavy and entangle irrelevant background features. Existing keypoint-based approaches can focus on manipulation-centric features and be lightweight, but either depend on manual heuristics or task-coupled selection, limiting scalability and semantic understanding. To address this, we propose AFFORD2ACT, an affordance-guided framework that distills a minimal set of semantic 2D keypoints from a text prompt and a single image. AFFORD2ACT follows a three-stage pipeline: affordance filtering, category-level keypoint construction, and transformer-based policy learning with embedded gating to reason about the most relevant keypoints, yielding a compact 38-dimensional state policy that can be trained in 15 minutes, which performs well in real-time without proprioception or dense representations. Across diverse real-world manipulation tasks, AFFORD2ACT consistently improves data efficiency, achieving an 82% success rate on unseen objects, novel categories, backgrounds, and distractors.