FAST: Efficient Action Tokenization for Vision-Language-Action Models

πŸ“… 2025-01-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Conventional per-dimension, per-timestep binning-based action tokenization fails for vision-language-action (VLA) models on high-frequency, dexterous robotic tasks, yielding poor reconstruction of continuous actions from discrete token predictions. Method: The paper proposes Frequency-space Action Sequence Tokenization (FAST), which uses the discrete cosine transform (DCT) to compress action sequences into a compact frequency-domain representation before discretizing them, overcoming the limitations of time-domain binning. Building on this, FAST+ is a universal black-box tokenizer, trained on 1M real robot action trajectories, that handles diverse action spaces and control frequencies. Results: Combined with the π0 VLA and trained on 10k hours of robot data, an autoregressive Transformer policy using FAST matches the performance of diffusion-based VLAs while reducing training time by up to 5×.

πŸ“ Abstract
Autoregressive sequence models, such as Transformer-based vision-language action (VLA) policies, can be tremendously effective for capturing complex and generalizable robotic behaviors. However, such models require us to choose a tokenization of our continuous action signals, which determines how the discrete symbols predicted by the model map to continuous robot actions. We find that current approaches for robot action tokenization, based on simple per-dimension, per-timestep binning schemes, typically perform poorly when learning dexterous skills from high-frequency robot data. To address this challenge, we propose a new compression-based tokenization scheme for robot actions, based on the discrete cosine transform. Our tokenization approach, Frequency-space Action Sequence Tokenization (FAST), enables us to train autoregressive VLAs for highly dexterous and high-frequency tasks where standard discretization methods fail completely. Based on FAST, we release FAST+, a universal robot action tokenizer, trained on 1M real robot action trajectories. It can be used as a black-box tokenizer for a wide range of robot action sequences, with diverse action spaces and control frequencies. Finally, we show that, when combined with the pi0 VLA, our method can scale to training on 10k hours of robot data and match the performance of diffusion VLAs, while reducing training time by up to 5x.
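The core idea in the abstract can be sketched in a few lines: transform an action chunk into the frequency domain with a DCT, then quantize the coefficients into integer tokens. This is a minimal illustration under stated assumptions, not the released implementation: the paper additionally applies byte-pair encoding on top of the quantized coefficients, and the `scale` factor here is a hypothetical hyperparameter.

```python
import numpy as np
from scipy.fft import dct, idct

def fast_tokenize(actions: np.ndarray, scale: float = 100.0) -> np.ndarray:
    """DCT-based action tokenization sketch (FAST-style).

    actions: (T, D) chunk of continuous actions, one column per dimension.
    Returns rounded DCT coefficients as integer tokens; the paper's
    tokenizer further compresses these with byte-pair encoding.
    """
    # Project each action dimension into the frequency domain.
    coeffs = dct(actions, axis=0, norm="ortho")
    # Quantize: rounding discards small high-frequency detail, which is
    # where most of the compression for smooth action signals comes from.
    return np.round(coeffs * scale).astype(int)

def fast_detokenize(tokens: np.ndarray, scale: float = 100.0) -> np.ndarray:
    """Invert the quantization and the DCT to recover an action chunk."""
    coeffs = tokens.astype(float) / scale
    return idct(coeffs, axis=0, norm="ortho")
```

On smooth, high-frequency control signals most DCT energy sits in the first few coefficients, so the quantized representation is far more compressible than per-timestep binning while round-tripping with small reconstruction error.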
Problem

Research questions and friction points this paper is trying to address.

Transformer-based Models
Vision-Language-Action (VLA) Policies
High-Frequency Dexterous Robotic Tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

FAST
Discrete Cosine Transform
Vision-Language-Action Modeling
πŸ”Ž Similar Papers
No similar papers found.