RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing affordance prediction methods struggle in unseen environments due to sparse retrieval coverage, limited generalization, and inaccurate contact point localization. This work proposes the RAAP framework, which integrates retrieval augmentation with cross-image action alignment. By decoupling static contact point localization from dynamic action direction prediction, RAAP leverages dense image correspondences to transfer contact points and introduces a dual-weighted attention mechanism that fuses multiple reference samples for robust action direction estimation. Requiring only minimal training data, the method achieves strong affordance prediction on novel objects and categories. Trained on small-scale subsets of DROID and HOI4D, RAAP enables zero-shot robotic manipulation in both simulation and real-world settings, significantly improving cross-category generalization and robustness.
📝 Abstract
Understanding object affordances is essential for enabling robots to perform purposeful and fine-grained interactions in diverse and unstructured environments. However, existing approaches either rely on retrieval, which is fragile due to sparsity and coverage gaps, or on large-scale models, which frequently mislocalize contact points and mispredict post-contact actions when applied to unseen categories, thereby hindering robust generalization. We introduce Retrieval-Augmented Affordance Prediction (RAAP), a framework that unifies affordance retrieval with alignment-based learning. By decoupling static contact localization and dynamic action direction, RAAP transfers contact points via dense correspondence and predicts action directions through a retrieval-augmented alignment model that consolidates multiple references with dual-weighted attention. Trained on compact subsets of DROID and HOI4D with as few as tens of samples per task, RAAP achieves consistent performance across unseen objects and categories, and enables zero-shot robotic manipulation in both simulation and the real world. Project website: https://github.com/SEU-VIPGroup/RAAP.
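The abstract's "retrieval-augmented alignment model that consolidates multiple references with dual-weighted attention" could be sketched roughly as below. This is a hypothetical illustration, not the paper's implementation: RAAP's attention weights are learned, whereas here both weights (`w_att`, `w_ret`) are derived from cosine similarity purely for simplicity, and the function name and signature are invented for this sketch.

```python
import numpy as np

def fuse_action_directions(query_feat, ref_feats, ref_dirs, tau=0.1):
    """Hypothetical dual-weighted fusion: combine a softmax attention
    weight with a retrieval-confidence weight to aggregate the action
    directions of retrieved reference samples."""
    # Cosine similarity between the query feature and each reference.
    q = query_feat / np.linalg.norm(query_feat)
    R = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sim = R @ q                                  # shape: (num_refs,)

    # Weight 1: softmax attention over references (temperature tau).
    w_att = np.exp(sim / tau)
    w_att /= w_att.sum()

    # Weight 2: retrieval confidence, here min-max normalized similarity
    # (a stand-in for whatever confidence the retriever would provide).
    w_ret = (sim - sim.min()) / (np.ptp(sim) + 1e-8)

    # Dual weighting: multiply the two weights and renormalize.
    w = w_att * w_ret
    w /= w.sum()

    # Weighted average of unit action directions, projected back to a
    # unit vector for the final predicted direction.
    d = (w[:, None] * ref_dirs).sum(axis=0)
    return d / np.linalg.norm(d)
```

The multiplicative combination means a reference must score well on *both* weights to influence the fused direction, which is one plausible reading of "dual-weighted"; the paper's actual formulation may differ.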
Problem

Research questions and friction points this paper is trying to address.

affordance prediction
retrieval
generalization
contact localization
unseen categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-Augmented Learning
Affordance Prediction
Cross-Image Alignment
Zero-Shot Manipulation
Dense Correspondence
Qiyuan Zhuang
School of Computer Science and Engineering, and Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Southeast University, Nanjing 211189, China
He-Yang Xu
School of Computer Science and Engineering, and Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Southeast University, Nanjing 211189, China
Yijun Wang
Southeast University
Machine Learning · Quantum · Vision Model
Xin-Yang Zhao
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Yang-Yang Li
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Xiu-Shen Wei
Professor, Southeast University
Computer Vision · Machine Learning · Artificial Intelligence