BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of deploying large-scale Vision-Language-Action (VLA) models on resource-constrained robotic platforms, this paper proposes BitVLA, the first VLA model in which every parameter is ternary, i.e., drawn from {-1, 0, 1}. Methodologically, it introduces three key contributions: (i) the first full-parameter 1-bit ternarization of a VLA model; (ii) a distillation-aware training strategy that compresses the vision encoder to 1.58-bit weights, with a full-precision teacher preserving cross-modal representation alignment; and (iii) a training pipeline that combines the compressed vision encoder, knowledge distillation, and fine-tuning on robot manipulation tasks. Evaluated on the LIBERO benchmark, BitVLA matches the performance of OpenVLA-OFT under 4-bit post-training quantization while consuming only 29.8% of its memory, substantially improving the feasibility of edge deployment.

📝 Abstract
Vision-Language-Action (VLA) models have shown impressive capabilities across a wide range of robotics manipulation tasks. However, their growing model size poses significant challenges for deployment on resource-constrained robotic systems. While 1-bit pretraining has proven effective for enhancing the inference efficiency of large language models with minimal performance loss, its application to VLA models remains underexplored. In this work, we present BitVLA, the first 1-bit VLA model for robotics manipulation, in which every parameter is ternary, i.e., {-1, 0, 1}. To further reduce the memory footprint of the vision encoder, we propose a distillation-aware training strategy that compresses the full-precision encoder to 1.58-bit weights. During this process, a full-precision encoder serves as a teacher model to better align latent representations. Despite the lack of large-scale robotics pretraining, BitVLA achieves performance comparable to the state-of-the-art model OpenVLA-OFT with 4-bit post-training quantization on the LIBERO benchmark, while consuming only 29.8% of the memory. These results highlight BitVLA's promise for deployment on memory-constrained edge devices. We release the code and model weights at https://github.com/ustcwhy/BitVLA.
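The ternary scheme described above can be sketched with BitNet b1.58-style absmean quantization; this is an assumption about the quantizer (the paper may use a different scaling rule), and the function name is illustrative. Each ternary weight carries log2(3) ≈ 1.58 bits of information, which is where the "1.58-bit" figure comes from.

```python
import numpy as np

def absmean_ternarize(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight tensor to {-1, 0, 1} with a per-tensor absmean scale.

    Illustrative BitNet b1.58-style sketch, not BitVLA's exact recipe.
    Dequantized weights are recovered as w_q * gamma.
    """
    gamma = np.abs(w).mean() + eps            # absmean scaling factor
    w_q = np.clip(np.round(w / gamma), -1, 1)  # ternary codes in {-1, 0, 1}
    return w_q, gamma

# Toy usage: every quantized entry lands in the ternary codebook.
w = np.random.randn(4, 4)
w_q, gamma = absmean_ternarize(w)
```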
Problem

Research questions and friction points this paper is trying to address.

Reducing VLA model size for resource-constrained robotics
Applying 1-bit pretraining to Vision-Language-Action models
Compressing vision encoder memory footprint efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

1-bit ternary VLA model for robotics
Distillation-aware training for vision encoder
Memory-efficient deployment on edge devices
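The distillation-aware training listed above can be sketched as a feature-alignment loss between the frozen full-precision teacher encoder and the 1.58-bit student encoder. The MSE objective and function name here are illustrative assumptions; the paper's exact alignment loss may differ.

```python
import numpy as np

def distill_align_loss(student_feats: np.ndarray, teacher_feats: np.ndarray) -> float:
    """Mean-squared error between student and teacher latent features.

    Hypothetical sketch of representation alignment: the compressed student
    encoder is trained to match the frozen full-precision teacher's features.
    """
    return float(np.mean((student_feats - teacher_feats) ** 2))

# Toy usage: identical features incur zero alignment loss.
teacher = np.ones((2, 8))
student = np.ones((2, 8))
loss = distill_align_loss(student, teacher)
```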
Hongyu Wang
Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences
Chuyan Xiong
Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences
Ruiping Wang
Professor, Institute of Computing Technology, Chinese Academy of Sciences
Computer Vision · Pattern Recognition · Machine Learning
Xilin Chen
University of Chinese Academy of Sciences