Bridging Scale Discrepancies in Robotic Control via Language-Based Action Representations

πŸ“… 2025-12-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address distribution shift arising from scale discrepancies in continuous action spaces across robotic platforms, this work proposes a natural-language-based directional action representation. Continuous actions are discretized into semantically grounded directional descriptions (e.g., "gently push left"), decoupling action semantics from platform-specific numerical values and thereby mitigating both inter-modal feature-distance mismatches and cross-task distribution misalignment. A semantic-driven language-model architecture is trained via multi-task learning to encode actions and language in a unified representational space. Experiments on two established benchmarks demonstrate that the representation significantly improves both policy generalization across heterogeneous robotic platforms and tasks and knowledge-transfer efficiency. The approach shifts representation alignment in robot control from numeric action spaces toward semantically invariant, language-grounded action abstractions.
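The discretization step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the deadzone/intensity thresholds, the intensity words, and the idea of normalizing by a per-platform action limit are all assumptions made for the example.

```python
def action_to_phrase(dx, dy, action_limit, deadzone=0.02, gentle_cutoff=0.3):
    """Discretize a 2-D delta-action into a directional description.

    dx, dy: commanded end-effector deltas; action_limit: the platform's
    maximum command magnitude. Normalizing by the platform limit makes the
    resulting phrase invariant to the numeric scale of the action space,
    which is the property the paper's representation relies on.
    """
    nx, ny = dx / action_limit, dy / action_limit
    mag = max(abs(nx), abs(ny))
    if mag < deadzone:
        return "stay still"
    # Name the direction after the dominant axis.
    if abs(nx) >= abs(ny):
        direction = "right" if nx > 0 else "left"
    else:
        direction = "forward" if ny > 0 else "backward"
    # Collapse magnitude into a coarse intensity word.
    intensity = "gently" if mag < gentle_cutoff else "firmly"
    return f"{intensity} push {direction}"

# Two platforms commanding the same relative motion at different numeric
# scales (e.g., meters vs. normalized units) map to the same phrase:
print(action_to_phrase(-0.01, 0.002, action_limit=0.05))  # "gently push left"
print(action_to_phrase(-0.2, 0.04, action_limit=1.0))     # "gently push left"
```

Because both platforms produce identical phrases for corresponding motions, a policy pretrained on one platform's language-encoded actions sees no numeric distribution shift when transferred to the other.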

πŸ“ Abstract
Recent end-to-end robotic manipulation research increasingly adopts architectures inspired by large language models to enable robust manipulation. However, a critical challenge arises from severe distribution shifts across robotic action data, primarily due to substantial numerical variations in action commands across diverse robotic platforms and tasks, hindering the effective transfer of pretrained knowledge. To address this limitation, we propose a semantically grounded linguistic representation to normalize actions for efficient pretraining. Unlike conventional discretized action representations that are sensitive to numerical scales, our motion representation deliberately disregards numeric scale effects, emphasizing directionality instead. This abstraction mitigates distribution shifts, yielding a more generalizable pretraining representation. Moreover, the motion representation narrows the feature distance between action tokens and standard vocabulary tokens, mitigating modality gaps. Multi-task experiments on two benchmarks demonstrate that the proposed method significantly improves generalization performance and transferability in robotic manipulation tasks.
Problem

Research questions and friction points this paper is trying to address.

Addresses distribution shifts in robotic action data
Normalizes actions using language-based representations
Improves generalization across robotic platforms and tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-based action representation normalizes robotic actions
Motion representation disregards numeric scale, emphasizes directionality
Narrows feature distance between action tokens and vocabulary tokens
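The third point above can be made concrete with a toy vocabulary check: directional phrases reuse tokens a language model already knows, whereas binned numeric actions introduce out-of-vocabulary symbols. The vocabulary and the `<act_bin_*>` token names below are illustrative assumptions, not the paper's actual tokenization.

```python
# Toy standard vocabulary a pretrained language model might already contain.
VOCAB = {"gently", "firmly", "push", "left", "right", "forward", "backward",
         "the", "robot", "arm"}

def vocab_coverage(tokens):
    """Fraction of tokens already present in the standard vocabulary."""
    return sum(t in VOCAB for t in tokens) / len(tokens)

phrase_tokens = "gently push left".split()
# Hypothetical discretized-numeric action tokens added as new symbols.
numeric_tokens = ["<act_bin_017>", "<act_bin_254>", "<act_bin_102>"]

print(vocab_coverage(phrase_tokens))   # 1.0: fully in-vocabulary
print(vocab_coverage(numeric_tokens))  # 0.0: every token is new
```

Full overlap with the pretrained vocabulary means the model can reuse existing token embeddings for actions, which is one intuition for why the modality gap narrows.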
Yuchi Zhang
Santa Clara University
Churui Sun
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, Harbin, China
Shiqi Liang
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, Harbin, China
Diyuan Liu
State Key Laboratory of Cognitive Intelligence, iFLYTEK Research, China
Chao Ji
State Key Laboratory of Cognitive Intelligence, iFLYTEK Research, China
Wei-Nan Zhang
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, Harbin, China; Suzhou Research Institute, Harbin Institute of Technology, Suzhou, China
Ting Liu
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, Harbin, China