🤖 AI Summary
Existing vision-language model (VLM)-driven robotic manipulation approaches rely on fixed spatial representations, which inherently trade off expressiveness against computational efficiency. Method: We propose a task-complexity-driven adaptive spatial representation mechanism, the first to dynamically select representation type and granularity based on task requirements, ensuring high perceptual fidelity while substantially reducing computational overhead. Our method integrates VLMs with multimodal perception through task intention analysis, spatial feature disentanglement, and lightweight representation generation, enabling plug-and-play deployment without additional training. Contribution/Results: Evaluated on real-world robotic platforms, our approach improves spatial understanding accuracy over fixed-representation baselines, accelerates inference by over 40% on average, and significantly enhances system robustness and cross-task generalization.
📝 Abstract
Building a general robotic manipulation system capable of performing a wide variety of tasks in real-world settings remains challenging. Vision-Language Models (VLMs) have demonstrated remarkable potential in robotic manipulation, primarily due to the extensive world knowledge they gain from large-scale datasets. In this process, spatial representations (such as points representing object positions or vectors representing object orientations) act as a bridge between VLMs and real-world scenes, effectively grounding the reasoning abilities of VLMs and applying them to specific task scenarios. However, existing VLM-based robotic approaches often adopt a fixed spatial representation extraction scheme across tasks, resulting in either insufficient representational capability or excessive extraction time. In this work, we introduce T-Rex, a Task-Adaptive Framework for Spatial Representation Extraction, which dynamically selects the most appropriate spatial representation extraction scheme for each entity based on the specific task requirements. Our key insight is that task complexity determines the types and granularity of spatial representations, and that stronger representational capability typically comes at a higher overall system operation cost. Through comprehensive experiments in real-world robotic environments, we show that our approach delivers significant advantages in spatial understanding, efficiency, and stability without additional training.
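The core idea, selecting the cheapest spatial representation whose expressiveness still covers the task's needs, can be sketched in a few lines. This is a minimal illustration only: the representation types, the `EntityTask` fields, and the selection rules below are assumptions for exposition, not the paper's actual interface or extraction pipeline.

```python
from dataclasses import dataclass
from enum import Enum

class SpatialRepr(Enum):
    """Hypothetical representation schemes, ordered from cheapest/least
    expressive to costliest/most expressive."""
    POINT = 1         # a keypoint: position only
    VECTOR = 2        # adds orientation (e.g., an axis for pouring)
    BOUNDING_BOX = 3  # adds coarse spatial extent
    MESH = 4          # full geometry: most expressive, slowest to extract

@dataclass
class EntityTask:
    """Illustrative per-entity task requirements, as a task-intention
    analysis stage might produce them."""
    entity: str
    needs_orientation: bool
    needs_extent: bool
    needs_contact_geometry: bool

def select_representation(task: EntityTask) -> SpatialRepr:
    """Pick the cheapest scheme that satisfies the task's requirements,
    reflecting the insight that stronger representational capability
    costs more to extract."""
    if task.needs_contact_geometry:
        return SpatialRepr.MESH
    if task.needs_extent:
        return SpatialRepr.BOUNDING_BOX
    if task.needs_orientation:
        return SpatialRepr.VECTOR
    return SpatialRepr.POINT
```

Under these assumptions, a simple pick-and-place target would resolve to a cheap point, while a pouring task (which needs the container's axis) would resolve to a vector, so extraction effort scales with task complexity rather than being fixed.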