AI Summary
To address the distribution shift arising from scale discrepancies in continuous action spaces across robotic platforms, this work proposes a natural-language-based directional action representation. It discretizes continuous actions into semantically grounded directional descriptions (e.g., "gently push left"), decoupling action semantics from platform-specific numerical values and thereby mitigating inter-modal feature-distance mismatches and cross-task distribution misalignment. A semantics-driven language-model architecture is trained via multi-task learning to jointly encode actions and language in a unified representation space. Experiments on two established benchmarks demonstrate that the representation significantly improves policy generalization across heterogeneous robotic platforms and tasks, as well as knowledge-transfer efficiency. This approach establishes a new paradigm for representation alignment in robot control, shifting the focus from numeric action spaces to semantically invariant, language-grounded action abstractions.
Abstract
Recent end-to-end robotic manipulation research increasingly adopts architectures inspired by large language models to enable robust manipulation. However, a critical challenge arises from severe distribution shifts across robotic action datasets, caused primarily by substantial numerical variation in action commands across diverse robotic platforms and tasks, which hinders the effective transfer of pretrained knowledge. To address this limitation, we propose a semantically grounded linguistic representation that normalizes actions for efficient pretraining. Unlike conventional discretized action representations, which are sensitive to numerical scale, our motion representation disregards numeric scale and emphasizes directionality instead. This abstraction mitigates distribution shift, yielding a more generalizable pretraining representation. Moreover, the motion representation narrows the feature distance between action tokens and standard vocabulary tokens, reducing the modality gap. Multi-task experiments on two benchmarks demonstrate that the proposed method significantly improves generalization performance and transferability in robotic manipulation tasks.
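The core idea of a scale-invariant, direction-only action description can be illustrated with a minimal sketch. The axis names, the "gently/firmly" adverbs, and the dominance-ratio rule below are illustrative assumptions, not the paper's actual vocabulary or discretization scheme:

```python
import numpy as np

# Hypothetical mapping from a continuous end-effector delta (x, y, z) to a
# directional phrase. Phrasing and thresholds are illustrative assumptions.
AXIS_WORDS = {0: ("right", "left"), 1: ("forward", "backward"), 2: ("up", "down")}

def action_to_phrase(delta, deadband=1e-3):
    """Describe the dominant motion direction, ignoring absolute numeric scale."""
    delta = np.asarray(delta, dtype=float)
    axis = int(np.argmax(np.abs(delta)))
    if abs(delta[axis]) < deadband:
        return "stay still"
    # The adverb depends on how dominant the leading axis is relative to the
    # total motion, so the phrase is invariant to a platform's action scale.
    ratio = abs(delta[axis]) / (np.abs(delta).sum() + 1e-9)
    adverb = "gently" if ratio < 0.6 else "firmly"
    word = AXIS_WORDS[axis][0 if delta[axis] > 0 else 1]
    return f"{adverb} move {word}"

# The same phrase results whether the platform commands millimeters or meters:
print(action_to_phrase([-0.004, 0.001, 0.0]))  # "firmly move left"
print(action_to_phrase([-4.0, 1.0, 0.0]))      # "firmly move left"
```

Because both deltas yield the same phrase, two platforms with very different command magnitudes would share identical action tokens, which is the distribution-alignment effect the abstract describes.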