OAT: Ordered Action Tokenization

📅 2026-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing autoregressive action modeling, which lacks efficient and structured discrete representations capable of simultaneously preserving sequence length, causal order, and decodability. To overcome this, we propose an ordered action tokenization method that, for the first time, unifies high compression ratio, full decodability, and strict causal ordering, enabling both autoregressive generation and prefix decoding while flexibly balancing inference efficiency against action fidelity. Our approach leverages a register-augmented Transformer architecture, finite scalar quantization (FSQ), and an order-inducing training mechanism to learn a structured action tokenizer. Evaluated across more than 20 tasks spanning four simulation benchmarks and real-world scenarios, our method significantly outperforms existing tokenization strategies and diffusion-based baselines.
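Finite scalar quantization (FSQ), the quantizer the summary mentions, can be illustrated with a minimal sketch. The `tanh` bounding and the per-dimension level counts below are common FSQ choices for illustration, not details taken from this paper, and the straight-through gradient trick used during training is omitted:

```python
import numpy as np

def fsq_quantize(z, levels):
    """Quantize each latent dimension to a fixed uniform grid in [-1, 1].

    `levels[d]` is the number of allowed values for dimension d, so the
    composite code is one of prod(levels) discrete tokens.
    """
    half = (np.asarray(levels, dtype=float) - 1) / 2.0
    bounded = np.tanh(z)                 # squash each dim into (-1, 1)
    return np.round(bounded * half) / half

z = np.array([0.9, -2.0, 0.1])          # toy latent vector
codes = fsq_quantize(z, levels=[8, 8, 8])
```

Because every output coordinate sits on a fixed grid, no learned codebook is needed; the grid itself is the vocabulary.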

📝 Abstract
Autoregressive policies offer a compelling foundation for scalable robot learning by enabling discrete abstraction, token-level reasoning, and flexible inference. However, applying autoregressive modeling to continuous robot actions requires an effective action tokenization scheme. Existing approaches either rely on analytical discretization methods that produce prohibitively long token sequences, or learned latent tokenizers that lack structure, limiting their compatibility with next-token prediction. In this work, we identify three desiderata for action tokenization - high compression, total decodability, and a left-to-right causally ordered token space - and introduce Ordered Action Tokenization (OAT), a learned action tokenizer that satisfies all three. OAT discretizes action chunks into an ordered sequence of tokens using a transformer with registers, finite scalar quantization, and ordering-inducing training mechanisms. The resulting token space aligns naturally with autoregressive generation and enables prefix-based detokenization, yielding an anytime trade-off between inference cost and action fidelity. Across more than 20 tasks spanning four simulation benchmarks and real-world settings, autoregressive policies equipped with OAT consistently outperform prior tokenization schemes and diffusion-based baselines, while offering significantly greater flexibility at inference time.
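The prefix-based detokenization described in the abstract can be sketched as follows. `toy_decoder` is a stand-in, not OAT's learned decoder: the point is only that any token prefix maps to a full-length action chunk, so inference can stop early at reduced fidelity:

```python
import numpy as np

def toy_decoder(tokens, chunk_len=4):
    """Stand-in decoder: maps any token prefix to a full action chunk.

    Ungenerated tokens are treated as zeros, so a longer prefix carries
    more information - a coarse-to-fine, "anytime" reconstruction.
    """
    full = np.zeros(chunk_len)
    full[:len(tokens)] = tokens
    return np.cumsum(full)              # arbitrary deterministic mapping

# In an ordered token space, earlier tokens matter most:
tokens = [0.5, -0.25, 0.125, 0.0625]
coarse = toy_decoder(tokens[:1])        # cheap, low-fidelity decode
fine = toy_decoder(tokens)              # full budget, full fidelity
```

Both calls return a chunk of the same length; only the reconstruction quality differs, which is the inference-cost/fidelity trade-off the abstract refers to.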
Problem

Research questions and friction points this paper is trying to address.

action tokenization
autoregressive policies
robot learning
discrete abstraction
causal ordering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ordered Action Tokenization
autoregressive policy
action tokenization
finite scalar quantization
causal ordering
Chaoqi Liu
University of Illinois at Urbana-Champaign
Robotics
Xiaoshen Han
Harvard University
Jiawei Gao
Harvard University
Yue Zhao
Stanford University
Haonan Chen
Harvard University
Yilun Du
Harvard University
Artificial Intelligence · Machine Learning · Robotics · Computer Vision