Learning Visual Affordance from Audio

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper introduces audio-visual affordance localization: a novel task that leverages action-associated sounds to precisely segment object interaction regions in images, overcoming ambiguities and occlusions inherent in text instructions or demonstration videos. To support this task, we construct the first benchmark dataset featuring synchronized action sounds, RGB images, and pixel-level interaction masks. We further propose AVAGFormer, a transformer-based architecture featuring a semantic-conditioned cross-modal hybrid mechanism and a dual-head decoder, enabling end-to-end audio-visual feature fusion and zero-shot generalization. Experiments demonstrate that our method significantly outperforms existing baselines on this new task. Crucially, it provides the first empirical validation of action sounds as highly discriminative and practical cues for visual affordance understanding. Our work establishes a new paradigm for multimodal embodied perception, bridging auditory signals and visual interaction semantics in a unified framework.

📝 Abstract
We introduce Audio-Visual Affordance Grounding (AV-AG), a new task that segments object interaction regions from action sounds. Unlike existing approaches that rely on textual instructions or demonstration videos, which are often limited by ambiguity or occlusion, audio provides real-time, semantically rich, and visually independent cues for affordance grounding, enabling a more intuitive understanding of interaction regions. To support this task, we construct the first AV-AG dataset, comprising a large collection of action sounds, object images, and pixel-level affordance annotations. The dataset also includes an unseen subset to evaluate zero-shot generalization. Furthermore, we propose AVAGFormer, a model equipped with a semantic-conditioned cross-modal mixer and a dual-head decoder that effectively fuses audio and visual signals for mask prediction. Experiments show that AVAGFormer achieves state-of-the-art performance on AV-AG, surpassing baselines from related tasks. Comprehensive analyses highlight the distinctions between AV-AG and AVS, the benefits of end-to-end modeling, and the contribution of each component. Code and dataset have been released at https://jscslld.github.io/AVAGFormer/.
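The abstract does not spell out the internals of the semantic-conditioned cross-modal mixer, but the general idea of conditioning visual features on an audio embedding can be illustrated with single-head cross-attention: the audio embedding acts as the query over visual patch features, yielding a per-patch relevance map (a stand-in for the mask head) plus a pooled score (a stand-in for the second head). Everything below is a hypothetical sketch with made-up names and dimensions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_fuse(audio_emb, visual_feats, d=64):
    """Hypothetical sketch: audio embedding queries visual patch features
    via single-head cross-attention (not AVAGFormer's exact mixer)."""
    # Random projections stand in for learned weights.
    Wq = rng.standard_normal((audio_emb.shape[-1], d)) / np.sqrt(d)
    Wk = rng.standard_normal((visual_feats.shape[-1], d)) / np.sqrt(d)
    q = audio_emb @ Wq                        # (1, d) audio query
    k = visual_feats @ Wk                     # (N, d) visual keys
    attn = softmax(q @ k.T / np.sqrt(d))      # (1, N) patch relevance
    # Two heads, loosely mirroring a dual-head decoder:
    mask_logits = attn.squeeze(0)             # (N,) per-patch affordance map
    cls_score = (attn @ k @ q.T).item()       # scalar audio-visual match score
    return mask_logits, cls_score

audio = rng.standard_normal((1, 128))     # e.g. a sound-event embedding
patches = rng.standard_normal((49, 256))  # e.g. a 7x7 visual patch grid
mask, score = cross_modal_fuse(audio, patches)
print(mask.shape)  # (49,)
```

In an end-to-end model the projections would be learned and the per-patch map would be upsampled to a pixel-level mask; this sketch only shows how an audio cue can re-weight visual regions.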
Problem

Research questions and friction points this paper is trying to address.

Segment object interaction regions from action sounds
Fuse audio and visual signals for mask prediction
Evaluate zero-shot generalization on unseen data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio-Visual Affordance Grounding task for segmentation
AVAGFormer model with cross-modal mixer and decoder
First AV-AG dataset with pixel-level annotations
Authors
Lidong Lu, Nanjing University (Multimodal Large Language Model)
Guo Chen, Nanjing University
Zhu Wei, China Mobile Communications Company Limited Research Institute
Yicheng Liu, Tsinghua University (Robotics)
Tong Lu, Nanjing University