🤖 AI Summary
This work addresses open-vocabulary mobile manipulation, where a robot must accurately localize and manipulate diverse objects in unseen indoor environments guided by free-form natural language instructions. We propose a hierarchical multimodal retrieval framework coupled with affordance-aware embodied memory modeling. For the first time, we decouple visual region semantics from physical affordances to construct an Affordance-Aware Embodied Memory, and we integrate vision-language models, region-level image embeddings, a functional scoring network, and a hierarchical retrieval architecture to enable zero-shot hierarchical retrieval and re-ranking. Our method achieves substantially better retrieval performance than state-of-the-art approaches on large-scale benchmarks, and in real-robot experiments it attains an 85% task success rate, marking the first demonstration of highly robust, instruction-driven mobile manipulation under open-vocabulary conditions.
📝 Abstract
In this study, we address the problem of open-vocabulary mobile manipulation, where a robot is required to carry a wide range of objects to receptacles based on free-form natural language instructions. This task is challenging, as it involves understanding both visual semantics and the affordances of manipulation actions. To tackle these challenges, we propose Affordance RAG, a zero-shot hierarchical multimodal retrieval framework that constructs an Affordance-Aware Embodied Memory from pre-explored images. The model retrieves candidate targets based on regional and visual semantics and reranks them with affordance scores, allowing the robot to identify manipulation options that are likely to be executable in real-world environments. Our method outperformed existing approaches in retrieval performance for mobile manipulation instructions in large-scale indoor environments. Furthermore, in real-world experiments where the robot performed mobile manipulation in indoor environments based on free-form instructions, the proposed method achieved a task success rate of 85%, outperforming existing methods in both retrieval performance and overall task success.
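To make the retrieve-then-rerank idea above concrete, the following is a minimal Python sketch of how a pre-explored embodied memory could be queried by semantic similarity and then re-ranked with affordance scores. All names here (`MemoryEntry`, `retrieve_and_rerank`, the blending weight `alpha`, and the cosine-similarity retrieval) are illustrative assumptions for exposition, not the actual Affordance RAG implementation, models, or scoring functions.

```python
# Sketch of a two-stage retrieve-then-rerank pipeline over an embodied memory.
# All names and the scoring formula are hypothetical placeholders.
from dataclasses import dataclass
from typing import List, Sequence, Tuple
import numpy as np


@dataclass
class MemoryEntry:
    """One region in a pre-explored, affordance-aware embodied memory."""
    region_embedding: np.ndarray      # region-level image embedding (e.g., from a VLM)
    affordance: float                 # precomputed physical-affordance score in [0, 1]
    pose: Tuple[float, float, float]  # where the robot should navigate to manipulate it


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def retrieve_and_rerank(
    instruction_embedding: np.ndarray,
    memory: Sequence[MemoryEntry],
    top_k: int = 10,
    alpha: float = 0.5,
) -> List[MemoryEntry]:
    """Stage 1: coarse retrieval by regional/visual semantic similarity.
    Stage 2: rerank the shortlisted candidates by blending in affordance scores.
    `alpha` is an assumed blending weight, not a value from the paper."""
    # Stage 1: rank all memory entries by instruction-to-region similarity.
    shortlist = sorted(
        memory,
        key=lambda e: cosine(instruction_embedding, e.region_embedding),
        reverse=True,
    )[:top_k]
    # Stage 2: affordance-aware reranking of the shortlist.
    return sorted(
        shortlist,
        key=lambda e: (1 - alpha) * cosine(instruction_embedding, e.region_embedding)
                      + alpha * e.affordance,
        reverse=True,
    )
```

In the actual system, the instruction and region embeddings would come from a vision-language model and the affordance scores from the functional scoring network; the sketch only illustrates the two-stage structure, semantic retrieval followed by affordance-aware re-ranking, described above.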