🤖 AI Summary
Existing language-driven affordance prediction models tightly couple high-level reasoning with low-level perception, depend heavily on annotated data, and generalize poorly. Method: We propose the first zero-shot, decoupled three-stage agent framework: a Dreamer (generates interaction visualizations), a Thinker (reasons about target object parts), and a Spotter (precisely localizes interaction points). This design eliminates end-to-end supervised training; instead, it relies on test-time collaboration among a generative model, a large vision-language model, and a vision foundation model for training-free inference. Contribution/Results: The framework achieves strong cross-object and cross-environment generalization, significantly outperforming supervised state-of-the-art methods on multiple benchmarks. It demonstrates robust zero-shot capability in real-world scenarios, establishing a new paradigm for language-guided interaction localization without task-specific training or labeled data.
📝 Abstract
Affordance prediction, which identifies interaction regions on objects based on language instructions, is critical for embodied AI. Prevailing end-to-end models couple high-level reasoning and low-level grounding into a single monolithic pipeline and rely on training over annotated datasets, which leads to poor generalization on novel objects and unseen environments. In this paper, we move beyond this paradigm by proposing A4-Agent, a training-free agentic framework that decouples affordance prediction into a three-stage pipeline. Our framework coordinates specialized foundation models at test time: (1) a $\textbf{Dreamer}$ that employs generative models to visualize $\textit{how}$ an interaction would look; (2) a $\textbf{Thinker}$ that utilizes large vision-language models to decide $\textit{what}$ object part to interact with; and (3) a $\textbf{Spotter}$ that orchestrates vision foundation models to precisely locate $\textit{where}$ the interaction area is. By leveraging the complementary strengths of pre-trained models without any task-specific fine-tuning, our zero-shot framework significantly outperforms state-of-the-art supervised methods across multiple benchmarks and demonstrates robust generalization to real-world settings.
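The decoupled Dreamer → Thinker → Spotter pipeline can be sketched as a simple test-time orchestration loop. This is a minimal illustration only: all class names, method names, and return values below are hypothetical placeholders, since the abstract does not specify an API or model interfaces.

```python
# Hypothetical sketch of the three-stage A4-Agent pipeline described above.
# Each stage wraps a frozen pre-trained model; nothing is fine-tuned.

class Dreamer:
    """Generative model: visualizes *how* the interaction would look."""
    def imagine(self, image: str, instruction: str) -> str:
        # Placeholder for a generated interaction visualization.
        return f"visualization({image}, {instruction})"

class Thinker:
    """Large vision-language model: decides *what* part to interact with."""
    def reason(self, visualization: str, instruction: str) -> str:
        # Placeholder for the reasoned target part, e.g. "handle".
        return "target-part"

class Spotter:
    """Vision foundation model: locates *where* the interaction area is."""
    def locate(self, image: str, part: str) -> tuple[float, float]:
        # Placeholder for a localized interaction point (normalized x, y).
        return (0.5, 0.5)

def a4_agent(image: str, instruction: str) -> tuple[float, float]:
    vis = Dreamer().imagine(image, instruction)      # stage 1: how
    part = Thinker().reason(vis, instruction)        # stage 2: what
    return Spotter().locate(image, part)             # stage 3: where

point = a4_agent("scene.png", "open the drawer")
```

The point here is the control flow: each stage consumes the previous stage's output at inference time, so any of the three underlying models can be swapped without retraining.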