Resource-Efficient Affordance Grounding with Complementary Depth and Semantic Prompts

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient use of multimodal information, model redundancy, and deployment challenges in embodied robotic affordance localization, this paper proposes BiT-Align, a lightweight image-depth-text joint alignment framework. It introduces a parameter-free Bypass Prompt Module (BPM) for cross-modal fusion and a Text Feature Guidance (TFG) attention mechanism that dynamically selects affordance-relevant regions and semantically aligned attention heads. Built on a ViT backbone, BiT-Align injects depth maps as visual prompts into the RGB encoder and is trained end to end. On AGD20K, BiT-Align achieves a 6.0% improvement in the KLD metric while reducing model parameters by 88.8%. On HICO-IIF, it shows markedly stronger cross-scene generalization and meets the computational constraints of edge deployment.
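For orientation, KLD here is the Kullback-Leibler divergence between the predicted and ground-truth affordance heatmaps, so lower is better and the 6.0% improvement is a 6.0% reduction. Below is a minimal sketch of the usual formulation; the benchmark's exact evaluation script may differ.

```python
import torch

def kld(pred: torch.Tensor, gt: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """KL divergence between a predicted and a ground-truth affordance heatmap.

    pred, gt: (H, W) non-negative maps; both are normalized to sum to 1.
    Lower is better.
    """
    p = pred / (pred.sum() + eps)   # predicted distribution
    q = gt / (gt.sum() + eps)       # ground-truth distribution
    return (q * torch.log(q / (p + eps) + eps)).sum()
```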

📝 Abstract
Affordance refers to the functional properties that an agent perceives and utilizes from its environment, and is key perceptual information required for robots to perform actions. This information is rich and multimodal in nature. Existing multimodal affordance methods face limitations in extracting useful information, mainly due to simple structural designs, basic fusion methods, and large model parameters, making it difficult to meet the performance requirements for practical deployment. To address these issues, this paper proposes the BiT-Align image-depth-text affordance mapping framework. The framework includes a Bypass Prompt Module (BPM) and a Text Feature Guidance (TFG) attention selection mechanism. BPM integrates the auxiliary-modality depth image directly as a prompt to the primary-modality RGB image, embedding it into the primary-modality encoder without introducing an additional encoder. This reduces the model's parameter count and effectively improves functional region localization accuracy. The TFG mechanism guides the selection and enhancement of attention heads in the image encoder using textual features, improving the understanding of affordance characteristics. Experimental results demonstrate that the proposed method achieves significant performance improvements on the public AGD20K and HICO-IIF datasets. On the AGD20K dataset, compared with the current state-of-the-art method, we achieve a 6.0% improvement in the KLD metric while reducing model parameters by 88.8%, demonstrating practical application value. The source code will be made publicly available at https://github.com/DAWDSE/BiT-Align.
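To make the BPM idea concrete, here is a minimal PyTorch sketch, not the paper's implementation: it assumes the depth map is replicated to three channels, patch-embedded with the shared RGB patch embedding (so the depth branch adds no parameters), and appended to the RGB tokens as prompt tokens. The class name `DepthPromptViT` and all layer sizes are hypothetical; see the official repository for the actual BPM.

```python
import torch
import torch.nn as nn

class DepthPromptViT(nn.Module):
    """ViT-style encoder that consumes a depth map as prompt tokens.

    Depth reuses the RGB patch embedding (shared weights), so no extra
    encoder and no additional parameters are introduced for depth.
    """
    def __init__(self, img_size=224, patch=16, dim=768, layers=12, heads=12):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        num_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)

    def forward(self, rgb, depth):
        # rgb: (B, 3, H, W); depth: (B, 1, H, W)
        rgb_tok = self.patch_embed(rgb).flatten(2).transpose(1, 2) + self.pos
        d3 = depth.repeat(1, 3, 1, 1)                  # reuse the RGB patch embedding
        dep_tok = self.patch_embed(d3).flatten(2).transpose(1, 2) + self.pos
        tokens = torch.cat([rgb_tok, dep_tok], dim=1)  # depth tokens act as prompts
        out = self.encoder(tokens)
        return out[:, : rgb_tok.size(1)]               # keep RGB tokens for grounding

# Toy usage
model = DepthPromptViT()
feats = model(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 224, 224))  # (1, 196, 768)
```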
Problem

Research questions and friction points this paper is trying to address.

How to exploit complementary depth and semantic cues for affordance grounding.
How to cut model parameters without sacrificing functional-region localization accuracy.
Existing multimodal affordance methods extract useful information poorly, owing to simple structural designs and basic fusion.
Innovation

Methods, ideas, or system contributions that make the work stand out.

BiT-Align jointly aligns image, depth, and text modalities for affordance grounding.
The Bypass Prompt Module (BPM) embeds the depth image as a prompt inside the RGB encoder, avoiding an extra encoder and cutting parameters (see the BPM sketch above).
Text Feature Guidance (TFG) uses textual features to select and enhance attention heads, sharpening affordance understanding (see the sketch below).
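As a rough illustration of text-guided head selection, the sketch below scores each attention head's pooled output against a text embedding and softly reweights the heads. The helper `tfg_head_reweight` is hypothetical, and it assumes the per-head dimension already matches the text-feature dimension (in practice a projection would align them); the paper's actual TFG mechanism may differ.

```python
import torch
import torch.nn.functional as F

def tfg_head_reweight(head_feats: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
    """Softly select attention heads by their alignment with a text feature.

    head_feats: (B, H, N, D) per-head token features from one attention layer.
    text_feat:  (B, D) pooled text embedding; D is assumed to match the
                per-head dimension (a projection would align them in practice).
    """
    pooled = head_feats.mean(dim=2)                                    # (B, H, D)
    sim = F.cosine_similarity(pooled, text_feat[:, None, :], dim=-1)   # (B, H)
    weights = torch.softmax(sim, dim=-1)                               # soft head selection
    return head_feats * weights[:, :, None, None]                      # amplify aligned heads

# Toy usage: 12 heads over 196 patch tokens.
out = tfg_head_reweight(torch.randn(2, 12, 196, 64), torch.randn(2, 64))  # (2, 12, 196, 64)
```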
👥 Authors
Yizhou Huang
School of Robotics and the National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University, China
Fan Yang
School of Robotics and the National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University, China
Guoliang Zhu
School of Robotics and the National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University, China
Gen Li
School of Informatics, University of Edinburgh, UK
Hao Shi
State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang University, China
Yukun Zuo
Hunan University
Continual learning, Domain adaptation, Active learning
Wenrui Chen
Hunan University
Robotics, Hands, Grasping, Dexterous Manipulation, Human-Robot Collaboration
Zhiyong Li
Professor of Computer Science, Hunan University
Computer vision, object detection
Kailun Yang
Professor, School of Artificial Intelligence and Robotics, Hunan University (HNU); KIT; UAH; ZJU
Computer Vision, Computational Optics, Intelligent Vehicles, Autonomous Driving, Robotics