🤖 AI Summary
This study addresses the challenge of high-precision 3D object localization for human-robot interaction by integrating monocular RGB images, natural language instructions, and robot state information. To this end, the authors propose an end-to-end framework built upon a pretrained vision-language model (VLM), enhanced with QLoRA-based efficient fine-tuning, a custom regression head, and a conditional routing mechanism. This design preserves the VLM’s general visual understanding capabilities while introducing dedicated 3D localization functionality. The work introduces a heterogeneous dataset comprising over 100,000 samples and demonstrates strong empirical performance, achieving a median absolute error of 13 mm—representing a fivefold improvement over the unmodified baseline. Notably, approximately 25% of predictions meet the accuracy threshold required for direct robotic manipulation.
📝 Abstract
Pre-trained general-purpose Vision-Language Models (VLMs) hold the potential to enhance intuitive human-machine interaction thanks to their rich world knowledge and 2D object detection capabilities. However, VLMs that detect 3D coordinates are rare. In this work, we investigate the interactive abilities of VLMs by having them return 3D object positions given a monocular RGB image from a wrist-mounted camera, a natural language instruction, and robot state information. We collected and curated a heterogeneous dataset of more than 100,000 images and fine-tuned a VLM using QLoRA with a custom regression head. By implementing conditional routing, our model retains its ability to process general visual queries while gaining specialized 3D position estimation capabilities. Our results demonstrate robust predictive performance, with a median absolute error of 13 mm on the test set, a five-fold improvement over a simpler baseline without fine-tuning. In about 25% of cases, predictions fall within a range considered acceptable for the robot to interact with objects.
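The conditional routing described above, where localization queries are dispatched to a dedicated regression head while all other queries fall through to the VLM's standard language-model head, can be sketched roughly as follows. All module names, dimensions, and the binary routing flag are illustrative assumptions for a minimal PyTorch sketch, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class PositionRegressionHead(nn.Module):
    """Hypothetical regression head mapping pooled VLM hidden states to (x, y, z)."""

    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 256),
            nn.GELU(),
            nn.Linear(256, 3),  # x, y, z position, e.g. in metres
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Mean-pool over the sequence dimension, then regress the 3D position.
        return self.mlp(hidden.mean(dim=1))


class RoutedVLMHead(nn.Module):
    """Conditional routing: localization queries use the regression head,
    general visual queries keep the VLM's usual next-token prediction head."""

    def __init__(self, hidden_dim: int = 768, vocab_size: int = 32000):
        super().__init__()
        self.regression_head = PositionRegressionHead(hidden_dim)
        self.lm_head = nn.Linear(hidden_dim, vocab_size)  # stand-in for the VLM's LM head

    def forward(self, hidden: torch.Tensor, is_localization: bool):
        if is_localization:
            return self.regression_head(hidden)  # shape: (batch, 3)
        return self.lm_head(hidden)              # shape: (batch, seq_len, vocab_size)


# Toy usage with random hidden states standing in for the VLM backbone output.
head = RoutedVLMHead()
hidden = torch.randn(2, 16, 768)  # (batch, seq_len, hidden_dim)
position = head(hidden, is_localization=True)
logits = head(hidden, is_localization=False)
```

Routing on a flag derived from the query (e.g. whether the instruction asks for an object position) is what lets the QLoRA-tuned model add 3D estimation without sacrificing its general visual question answering.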