T-Rex: Task-Adaptive Spatial Representation Extraction for Robotic Manipulation with Vision-Language Models

📅 2025-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language model (VLM)-driven robotic manipulation approaches rely on fixed spatial representations, which inherently compromise expressiveness and computational efficiency. Method: We propose a task-complexity-driven adaptive spatial representation mechanism—the first to dynamically select representation type and granularity based on task requirements—ensuring high perceptual fidelity while substantially reducing computational overhead. Our method integrates VLMs with multimodal perception through task intention analysis, spatial feature disentanglement, and lightweight representation generation, enabling plug-and-play deployment without additional training. Contribution/Results: Evaluated on real-world robotic platforms, our approach improves spatial understanding accuracy over fixed-representation baselines, accelerates inference by over 40% on average, and significantly enhances system robustness and cross-task generalization capability.

📝 Abstract
Building a general robotic manipulation system capable of performing a wide variety of tasks in real-world settings is challenging. Vision-Language Models (VLMs) have demonstrated remarkable potential in robotic manipulation tasks, primarily due to the extensive world knowledge they gain from large-scale datasets. In this process, spatial representations (such as points representing object positions or vectors representing object orientations) act as a bridge between VLMs and real-world scenes, effectively grounding the reasoning abilities of VLMs and applying them to specific task scenarios. However, existing VLM-based robotic approaches often adopt a fixed spatial representation extraction scheme across tasks, resulting in insufficient representational capability or excessive extraction time. In this work, we introduce T-Rex, a Task-Adaptive Framework for Spatial Representation Extraction, which dynamically selects the most appropriate spatial representation extraction scheme for each entity based on the specific task requirements. Our key insight is that task complexity determines the types and granularity of spatial representations, and that stronger representational capability typically comes at a higher overall system operating cost. Through comprehensive experiments in real-world robotic environments, we show that our approach delivers significant advantages in spatial understanding, efficiency, and stability without additional training.
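The paper does not publish its selection logic, but the core idea of the abstract (escalate to richer, costlier spatial representations only when the task demands them) can be sketched as follows. All type and field names here (`Representation`, `TaskSpec`, `select_representation`, the precision threshold) are illustrative assumptions, not the authors' actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Representation(Enum):
    POINT = "point"          # single keypoint marking an object position (cheapest)
    VECTOR = "vector"        # direction vector encoding object orientation
    BOUNDING_BOX = "bbox"    # coarse spatial extent of the object
    POSE_6DOF = "pose_6dof"  # full position + orientation (most expressive, costliest)

@dataclass
class TaskSpec:
    needs_orientation: bool  # e.g. insertion or pouring tasks
    needs_extent: bool       # e.g. grasp planning around clutter
    precision_m: float       # required tolerance in metres (smaller = harder)

def select_representation(task: TaskSpec) -> Representation:
    """Pick the cheapest representation that still satisfies the task.

    Hypothetical sketch of task-complexity-driven selection: richer
    representations cost more to extract, so escalate only when needed.
    """
    if task.needs_orientation and task.precision_m < 0.01:
        return Representation.POSE_6DOF
    if task.needs_orientation:
        return Representation.VECTOR
    if task.needs_extent:
        return Representation.BOUNDING_BOX
    return Representation.POINT
```

Under this sketch, a coarse pick-up task would be served by a single point, while a tight peg-insertion task would trigger full 6-DoF pose extraction; the actual criteria T-Rex uses for task intention analysis are more involved than this threshold rule.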
Problem

Research questions and friction points this paper is trying to address.

Fixed spatial representation limits VLM-based robotic manipulation performance
Task complexity requires adaptive spatial representation extraction
Balancing representation capability and system cost in robotic tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic spatial representation selection per task
Balances representation power and system cost
Enhances spatial understanding without extra training