🤖 AI Summary
To address insufficient multimodal information utilization, model redundancy, and deployment challenges in embodied robotic affordance localization, this paper proposes BiT-Align, a lightweight image-depth-text joint alignment framework. It introduces a parameter-free Bypass Prompt Module (BPM) for cross-modal fusion and a Text Feature Guidance (TFG) attention mechanism that dynamically selects affordance-relevant regions and semantically aligned attention heads. Built upon a dual-stream ViT architecture, BiT-Align treats depth maps as visual prompts and is optimized end-to-end with contrastive learning. On AGD20K, BiT-Align achieves a 6.0% improvement in the KLD metric while reducing model parameters by 88.8%. On HICO-IIF, it demonstrates significantly stronger cross-scene generalization and meets the computational constraints required for edge deployment.
📝 Abstract
Affordance refers to the functional properties that an agent perceives and utilizes from its environment, and it is key perceptual information required for robots to perform actions. This information is rich and multimodal in nature. Existing multimodal affordance methods are limited in how much useful information they extract, mainly due to simple structural designs, basic fusion methods, and large parameter counts, making it difficult to meet the performance requirements of practical deployment. To address these issues, this paper proposes the BiT-Align image-depth-text affordance mapping framework. The framework includes a Bypass Prompt Module (BPM) and a Text Feature Guidance (TFG) attention selection mechanism. BPM integrates the auxiliary depth modality directly as a prompt to the primary RGB modality, embedding it into the primary-modality encoder without introducing an additional encoder. This reduces the model's parameter count and effectively improves functional-region localization accuracy. The TFG mechanism uses textual features to guide the selection and enhancement of attention heads in the image encoder, improving the understanding of affordance characteristics. Experimental results demonstrate that the proposed method achieves significant performance improvements on the public AGD20K and HICO-IIF datasets. On the AGD20K dataset, compared with the current state-of-the-art method, we achieve a 6.0% improvement in the KLD metric while reducing model parameters by 88.8%, demonstrating practical application value. The source code will be made publicly available at https://github.com/DAWDSE/BiT-Align.
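The abstract does not give equations for BPM or TFG, so the following is only a toy NumPy sketch of the two ideas as described: depth patch tokens injected into the RGB token stream without new parameters (BPM), and attention heads reweighted by their similarity to a text feature (TFG). All shapes, the additive prompt form, and the softmax head weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, H = 64, 16, 4          # embed dim, patch tokens, attention heads (assumed)
head_dim = D // H

# Patch tokens from the RGB image and the depth map. In the paper's framework the
# depth tokens would come from the frozen primary-modality patch embedding, so no
# extra encoder (and no extra parameters) is introduced.
rgb_tokens = rng.standard_normal((N, D))
depth_tokens = rng.standard_normal((N, D))

# --- BPM-style bypass prompting (assumed additive form): depth tokens act as
# parameter-free prompts on the RGB tokens before they enter the encoder.
prompted = rgb_tokens + depth_tokens

# --- TFG-style head selection (assumed): score each head's pooled feature
# against the matching slice of the text feature, then softmax-weight the heads.
text_feat = rng.standard_normal(D)                       # e.g. a CLIP-style text embedding
heads = prompted.mean(axis=0).reshape(H, head_dim)       # per-head pooled image features
text_heads = text_feat.reshape(H, head_dim)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

scores = np.array([cosine(heads[h], text_heads[h]) for h in range(H)])
weights = np.exp(scores) / np.exp(scores).sum()          # text-guided head weights
fused = (weights[:, None] * heads).reshape(D)            # heads re-emphasized by text guidance
```

The point of the sketch is the data flow: depth enters as a prompt on the existing token stream rather than through a second encoder, and text decides which heads contribute most, which is what lets the real model stay small.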