🤖 AI Summary
To address insufficient target mask accuracy in robotic grasping of occluded or visually similar products in convenience stores, this paper proposes CLIP-SAM, a collaborative segmentation framework. The method establishes the first image-text cross-modal reasoning pipeline for retail robotics, incorporating an attention-guided cross-modal alignment mechanism and introducing a retail-specific fine-tuning dataset alongside gradient-weighted mask optimization. By integrating CLIP's semantic alignment, SAM's zero-shot segmentation, and Grad-CAM's gradient-based attention visualization, CLIP-SAM improves the robustness of fine-grained localization through multimodal prompt engineering and domain-adaptive fine-tuning. Evaluated on a convenience store product dataset, the approach achieves a 12.7% improvement in mask IoU over baseline methods, significantly boosting segmentation accuracy under complex occlusion. The resulting high-fidelity masks feed directly into vision-based servo control of robotic manipulators.
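To make the described pipeline concrete, the sketch below wires the three components together in the order the summary implies: CLIP scores the image against a product query, the gradient of that score with respect to the pixels yields an attention peak, and the peak becomes a point prompt for SAM. This is a minimal sketch, not the paper's implementation: the `openai/clip-vit-base-patch32` checkpoint, Meta's `segment_anything` package, and the `text_guided_mask` helper are illustrative assumptions, and raw pixel-gradient saliency stands in for the Grad-CAM heatmap the paper uses.

```python
# Sketch of a CLIP -> gradient attention -> SAM prompting pipeline
# (simplified stand-in for the CLIP-SAM method described above).
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from segment_anything import sam_model_registry, SamPredictor


def text_guided_mask(image: Image.Image, query: str, sam_ckpt: str) -> np.ndarray:
    """Return a binary mask for the product named in `query` (hypothetical helper)."""
    # 1) Score the image against the text query with CLIP.
    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = proc(text=[query], images=image, return_tensors="pt", padding=True)
    pixels = inputs["pixel_values"].requires_grad_(True)
    score = clip(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        pixel_values=pixels,
    ).logits_per_image[0, 0]

    # 2) Gradient-based attention: backpropagate the image-text score to the
    #    pixels and take the per-pixel gradient magnitude (a crude stand-in
    #    for the Grad-CAM heatmap used in the paper).
    score.backward()
    saliency = pixels.grad[0].abs().sum(dim=0)  # (H, W) at CLIP resolution
    gy, gx = np.unravel_index(saliency.argmax().item(), tuple(saliency.shape))

    # 3) Rescale the peak-attention point to original image coordinates and
    #    use it as a positive point prompt for SAM.
    w, h = image.size
    point = np.array([[gx * w / saliency.shape[1], gy * h / saliency.shape[0]]])
    sam = sam_model_registry["vit_b"](checkpoint=sam_ckpt)
    predictor = SamPredictor(sam)
    predictor.set_image(np.array(image.convert("RGB")))
    masks, scores, _ = predictor.predict(
        point_coords=point,
        point_labels=np.array([1]),  # 1 = foreground point
        multimask_output=True,
    )
    return masks[int(scores.argmax())]  # keep the highest-confidence mask
```

A single positive point prompt is the simplest SAM interface; the multimodal prompt engineering the summary mentions presumably combines several such cues (points, boxes, or attention-weighted regions) rather than one argmax pixel.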
📄 Abstract
This paper introduces a novel pipeline to improve the precision of object masking for robotic manipulation in the specific domain of convenience store products. The approach integrates two advanced AI models, CLIP and SAM, focusing on their synergistic combination and the effective use of multimodal data (image and text). Emphasis is placed on gradient-based attention mechanisms and customized datasets for fine-tuning performance. While CLIP, SAM, and Grad-CAM are established components individually, their integration within this structured pipeline represents a significant contribution to the field. The segmented masks produced by the combined approach serve as inputs to robotic systems, enabling more precise and adaptive manipulation of convenience store products.
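As a minimal illustration of the final claim, a binary mask can be reduced to a 2D pick point, for example its centroid, before being mapped into the manipulator's frame. The `mask_to_pick_point` helper below is hypothetical, not part of the paper; a real visual-servoing stack would refine this with camera calibration, depth, and grasp planning.

```python
# Hypothetical post-processing: collapse a segmentation mask into one pick point.
import numpy as np


def mask_to_pick_point(mask: np.ndarray) -> tuple[float, float]:
    """Centroid (x, y) of a binary mask in pixel coordinates.

    A real system would project this point into the robot's frame using the
    camera intrinsics/extrinsics and a depth estimate before servoing.
    """
    ys, xs = np.nonzero(mask)  # pixel indices inside the mask
    if xs.size == 0:
        raise ValueError("empty mask: nothing to grasp")
    return float(xs.mean()), float(ys.mean())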