Think Proprioceptively: Embodied Visual Reasoning for VLA Manipulation

📅 2026-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes ThinkProprio, an approach that integrates proprioceptive signals into vision-language-action (VLA) models by encoding them as text tokens and fusing them with the task instruction at the input stage, within the vision-language model's embedding space. Unlike conventional VLA models that treat proprioception as a late-stage conditioning signal, which limits state-aware instruction comprehension and visual attention, ThinkProprio lets embodied state information guide subsequent visual reasoning and dynamically select the most task-critical visual tokens. Evaluated on CALVIN, LIBERO, and real-world manipulation tasks, the method matches or exceeds strong baselines while reducing end-to-end inference latency by over 50%. Notably, it matches full-token accuracy using only about 15% of the visual tokens, substantially improving computational efficiency without sacrificing task effectiveness.
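The core idea of encoding proprioception as text and fusing it with the instruction before tokenization can be sketched as follows. This is a minimal illustration, not the paper's implementation: the rounding granularity and the "State:" / "Task:" templates are assumptions.

```python
# Hedged sketch: textify a proprioceptive state vector (e.g. joint
# positions and gripper state) so it is tokenized alongside the task
# instruction and enters the VLM embedding space at the input stage.
def textify_proprio(state, decimals=2):
    """Render a numeric state vector as a short, signed text string."""
    return "State: " + " ".join(f"{v:+.{decimals}f}" for v in state)

def fuse_with_instruction(state, instruction):
    """Prepend the textified state so both are tokenized together."""
    return textify_proprio(state) + " Task: " + instruction

prompt = fuse_with_instruction([0.12, -0.53, 0.98], "pick up the red block")
print(prompt)  # State: +0.12 -0.53 +0.98 Task: pick up the red block
```

Because the state is plain text, no learned projector is needed; per the paper's ablation, this text tokenization outperformed learned projectors.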

📝 Abstract
Vision-language-action (VLA) models typically inject proprioception only as a late conditioning signal, which prevents robot state from shaping instruction understanding and from influencing which visual tokens are attended to throughout the policy. We introduce ThinkProprio, which converts proprioception into a sequence of text tokens in the VLM embedding space and fuses them with the task instruction at the input. This early fusion lets embodied state participate in subsequent visual reasoning and token selection, biasing computation toward action-critical evidence while suppressing redundant visual tokens. In a systematic ablation over proprioception encoding, state entry point, and action-head conditioning, we find that text tokenization is more effective than learned projectors, and that retaining roughly 15% of visual tokens can match the performance of the full token set. Across CALVIN, LIBERO, and real-world manipulation, ThinkProprio matches or improves over strong baselines while reducing end-to-end inference latency by over 50%.
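The visual-token pruning step described above can be sketched as a state-conditioned top-k selection. This is an illustrative assumption, not the paper's exact mechanism: the dot-product relevance score and the fixed keep ratio stand in for whatever scoring the model learns.

```python
import numpy as np

# Hedged sketch of state-conditioned visual token selection: rank each
# visual token by relevance to a fused instruction+state query and keep
# only the top ~15%, discarding redundant tokens before the policy runs.
def select_visual_tokens(visual_tokens, query, keep_ratio=0.15):
    """visual_tokens: (N, d) array; query: (d,) fused text/state embedding."""
    scores = visual_tokens @ query                   # relevance per token
    k = max(1, int(round(len(visual_tokens) * keep_ratio)))
    keep = np.argsort(scores)[-k:]                   # indices of top-k tokens
    return visual_tokens[np.sort(keep)]              # keep original ordering

rng = np.random.default_rng(0)
tokens = rng.standard_normal((196, 64))  # e.g. a 14x14 patch grid
query = rng.standard_normal(64)
kept = select_visual_tokens(tokens, query)
print(kept.shape)  # (29, 64): 196 * 0.15 rounds to 29 tokens retained
```

Dropping ~85% of visual tokens at the input is what makes the reported latency reduction plausible, since attention cost grows with sequence length.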
Problem

Research questions and friction points this paper is trying to address.

proprioception
vision-language-action
embodied reasoning
visual token selection
instruction understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

proprioception
vision-language-action
early fusion
visual token selection
embodied reasoning