Attention-Guided Integration of CLIP and SAM for Precise Object Masking in Robotic Manipulation

πŸ“… 2025-01-21
πŸ›οΈ IEEE/SICE International Symposium on System Integration
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address insufficient target mask accuracy in robotic grasping of occluded or visually similar products in convenience stores, this paper proposes CLIP-SAM, a collaborative segmentation framework. The method establishes the first image-text cross-modal reasoning pipeline for retail robotics, incorporating an attention-guided cross-modal alignment mechanism and introducing a retail-specific fine-tuning dataset alongside gradient-weighted mask optimization. By synergistically integrating CLIP’s semantic alignment capability, SAM’s zero-shot segmentation ability, and Grad-CAM’s gradient-based attention visualization, CLIP-SAM enhances fine-grained localization robustness via multimodal prompt engineering and domain-adaptive fine-tuning. Evaluated on a convenience store product dataset, the approach achieves a 12.7% improvement in mask IoU over baseline methods, significantly boosting segmentation accuracy under complex occlusion. The resulting high-fidelity masks are directly applicable to vision-based servo control of robotic manipulators.
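The summary above describes turning CLIP's gradient-based attention (Grad-CAM) into guidance for SAM's segmentation. A minimal sketch of that hand-off, under the assumption that the pipeline converts a Grad-CAM heat-map into SAM-style foreground point prompts: the model calls themselves are stood in for by a synthetic heat-map, and the helper name `heatmap_to_point_prompts` is hypothetical, not from the paper.

```python
import numpy as np

def heatmap_to_point_prompts(heatmap, k=3, threshold=0.5):
    """Pick the k hottest pixels above `threshold` as point prompts.

    `heatmap` is an (H, W) Grad-CAM attention map normalized to [0, 1].
    Returns (points, labels): points are (x, y) pixel coordinates and
    labels are all 1 (foreground), matching SAM's point-prompt convention.
    """
    flat = heatmap.ravel()
    order = np.argsort(flat)[::-1][:k]       # indices of the k hottest pixels
    order = order[flat[order] >= threshold]  # drop weak activations
    ys, xs = np.unravel_index(order, heatmap.shape)
    points = np.stack([xs, ys], axis=1)      # SAM expects (x, y) order
    labels = np.ones(len(points), dtype=int)
    return points, labels

# Synthetic heat-map standing in for Grad-CAM over CLIP's image encoder,
# with one strong activation on the target product and one weaker one.
h = np.zeros((8, 8))
h[2, 5] = 0.9
h[6, 1] = 0.6
points, labels = heatmap_to_point_prompts(h, k=3, threshold=0.5)
# points → [[5, 2], [1, 6]]; these would be passed to SAM's predictor
# as input_point / input_label arguments.
```

In the real pipeline these prompts would be fed to a SAM predictor alongside the image embedding; thresholding the attention map keeps spurious low-activation regions (e.g. visually similar neighboring products) from seeding the mask.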

πŸ“ Abstract
This paper introduces a novel pipeline to enhance the precision of object masking for robotic manipulation within the specific domain of masking products in convenience stores. The approach integrates two advanced AI models, CLIP and SAM, focusing on their synergistic combination and the effective use of multimodal data (image and text). Emphasis is placed on utilizing gradient-based attention mechanisms and customized datasets to fine-tune performance. While CLIP, SAM, and Grad-CAM are established components, their integration within this structured pipeline represents a significant contribution to the field. The resulting segmented masks, generated through this combined approach, can be effectively utilized as inputs for robotic systems, enabling more precise and adaptive object manipulation in the context of convenience store products.
Problem

Research questions and friction points this paper is trying to address.

Enhance object masking precision for robotic manipulation.
Integrate CLIP and SAM for multimodal data synergy.
Utilize attention mechanisms for adaptive object manipulation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates CLIP and SAM models
Uses gradient-based attention mechanisms
Custom datasets for fine-tuning
Muhammad A. Muttaqien
Automation Research Team, National Institute of AIST, Tokyo, Japan
Tomohiro Motoda
National Institute of Advanced Industrial Science and Technology (AIST)
Ryo Hanai
Automation Research Team, National Institute of AIST, Tokyo, Japan
Y. Domae
Automation Research Team, National Institute of AIST, Tokyo, Japan