🤖 AI Summary
To address the high computational cost and poor deployability of multimodal large language models in language-guided robotic grasping, this paper proposes a lightweight, parameter-efficient fine-tuning framework built upon CLIP. Methodologically, it introduces a bidirectional vision-language adapter for pixel-level semantic alignment and incorporates an RGB-D fusion branch to integrate geometric priors, thereby enhancing grasping robustness. The unified framework supports three core tasks: referring expression segmentation, referring grasp synthesis, and referring grasp affordance. Experiments demonstrate that the approach significantly outperforms full-model fine-tuning and existing parameter-efficient tuning (PET) methods on RefCOCO+. On the RGS and RGA benchmarks, it accurately parses simple instructions while effectively handling complex spatial-reasoning scenarios, such as disambiguating multiple identical objects, exhibiting both computational efficiency and strong generalization capability.
📝 Abstract
The language-guided robot grasping task requires a robot agent to integrate multimodal information from both visual and linguistic inputs to predict actions for target-driven grasping. While recent approaches utilizing Multimodal Large Language Models (MLLMs) have shown promising results, their extensive computation and data demands limit the feasibility of local deployment and customization. To address this, we propose a novel CLIP-based multimodal parameter-efficient tuning (PET) framework designed for three language-guided object grounding and grasping tasks: (1) Referring Expression Segmentation (RES), (2) Referring Grasp Synthesis (RGS), and (3) Referring Grasp Affordance (RGA). Our approach introduces two key innovations: a bi-directional vision-language adapter that aligns multimodal inputs for pixel-level language understanding, and a depth fusion branch that incorporates geometric cues to facilitate robot grasping predictions. Experimental results demonstrate superior performance in the RES object grounding task compared with existing CLIP-based full-model tuning or PET approaches. In the RGS and RGA tasks, our model not only effectively interprets object attributes based on simple language descriptions but also shows strong potential for comprehending complex spatial reasoning scenarios, such as when multiple identical objects are present in the workspace.
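The abstract does not give implementation details of the bi-directional adapter, but the general idea of a parameter-efficient adapter that lets frozen CLIP visual tokens and text tokens attend to each other can be sketched as below. This is a hypothetical illustration in PyTorch: the class name, bottleneck design, and all dimensions are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class BiDirectionalVLAdapter(nn.Module):
    """Hypothetical sketch of a bidirectional vision-language adapter.

    Small bottleneck projections plus cross-attention in both directions
    exchange information between frozen CLIP visual tokens and text tokens;
    only these adapter parameters would be trained (the PET setting).
    """

    def __init__(self, dim: int = 512, bottleneck: int = 64, heads: int = 4):
        super().__init__()
        # down-project into a narrow bottleneck to keep parameter count low
        self.v_down = nn.Linear(dim, bottleneck)
        self.t_down = nn.Linear(dim, bottleneck)
        # vision attends to language, and language attends to vision
        self.v2t = nn.MultiheadAttention(bottleneck, heads, batch_first=True)
        self.t2v = nn.MultiheadAttention(bottleneck, heads, batch_first=True)
        self.v_up = nn.Linear(bottleneck, dim)
        self.t_up = nn.Linear(bottleneck, dim)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor):
        v, t = self.v_down(vis), self.t_down(txt)
        v_att, _ = self.v2t(v, t, t)   # visual tokens query text tokens
        t_att, _ = self.t2v(t, v, v)   # text tokens query visual tokens
        # residual connections preserve the frozen backbone features
        return vis + self.v_up(v_att), txt + self.t_up(t_att)

vis = torch.randn(2, 196, 512)  # e.g. 14x14 CLIP patch tokens per image
txt = torch.randn(2, 20, 512)   # e.g. 20 text tokens per expression
adapter = BiDirectionalVLAdapter()
v_out, t_out = adapter(vis, txt)
print(v_out.shape, t_out.shape)
```

Because the adapter returns token grids at the original resolution and dimension, a segmentation or grasp-prediction head could consume its outputs in place of the raw backbone features; the depth fusion branch mentioned in the abstract would supply an additional geometric feature stream, which this sketch omits.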