QuantVLA: Scale-Calibrated Post-Training Quantization for Vision-Language-Action Models

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational and memory costs of vision–language–action (VLA) models during deployment, which intensify with increasing model scale. The authors propose the first training-free post-training quantization framework that preserves the original architecture while enabling efficient low-bit inference through three calibration strategies: selective integer quantization of linear layers, attention temperature-matching scaling, and output head balancing. Notably, this approach achieves the first successful quantization of diffusion Transformer action heads using only a small amount of unlabeled calibration data. Evaluated on the LIBERO benchmark, the quantized models not only surpass the task success rate of their full-precision counterparts but also reduce memory consumption by approximately 70% and achieve a 1.22× end-to-end inference speedup.
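The summary above mentions selective integer quantization of linear layers calibrated with a small unlabeled buffer. A minimal numpy sketch of what this could look like, assuming symmetric per-output-channel weight quantization and simple max calibration for activations; the function names and the max-calibration rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def quantize_weights_per_channel(W, n_bits=8):
    """Symmetric per-output-channel integer quantization of a linear layer's
    weight matrix. Returns integer weights plus per-channel dequant scales."""
    qmax = 2 ** (n_bits - 1) - 1                 # e.g. 127 for int8
    scale = np.abs(W).max(axis=1, keepdims=True) / qmax
    scale = np.maximum(scale, 1e-8)              # guard against all-zero rows
    W_int = np.clip(np.round(W / scale), -qmax - 1, qmax).astype(np.int8)
    return W_int, scale

def calibrate_activation_scale(calib_acts, n_bits=8):
    """Pick one activation scale from a small unlabeled calibration buffer
    (here: plain max calibration over every buffered activation tensor)."""
    qmax = 2 ** (n_bits - 1) - 1
    return max(np.abs(a).max() for a in calib_acts) / qmax

# Toy usage: quantize one linear layer and bound the reconstruction error.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)).astype(np.float32)
W_int, w_scale = quantize_weights_per_channel(W)
W_deq = W_int.astype(np.float32) * w_scale       # dequantized weights
err = np.abs(W - W_deq).max()                    # bounded by max(scale) / 2
```

Because rounding moves each entry by at most half a quantization step, the worst-case reconstruction error is bounded by half the largest per-channel scale, which is why per-channel (rather than per-tensor) scales keep small-magnitude output channels accurate.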

📝 Abstract
Vision-language-action (VLA) models unify perception, language, and control for embodied agents but face significant challenges in practical deployment due to rapidly increasing compute and memory demands, especially as models scale to longer horizons and larger backbones. To address these bottlenecks, we introduce QuantVLA, a training-free post-training quantization (PTQ) framework that, to our knowledge, is the first PTQ approach for VLA systems and the first to successfully quantize a diffusion transformer (DiT) action head. QuantVLA incorporates three scale-calibrated components: (1) a selective quantization layout that integerizes all linear layers in both the language backbone and the DiT while keeping attention projections in floating point to preserve the original operator schedule; (2) attention temperature matching, a lightweight per-head scaling mechanism that stabilizes attention logits and is folded into the dequantization scales at inference; and (3) output head balancing, a per-layer residual interface calibration that mitigates post-projection energy drift. The framework requires no additional training, uses only a small unlabeled calibration buffer, and supports integer kernels for low-bit weights and activations while leaving the architecture unchanged. Across representative VLA models on LIBERO, QuantVLA exceeds the task success rates of full-precision baselines, achieves about 70% relative memory savings on the quantized components, and delivers a 1.22x speedup in end-to-end inference latency, providing a practical pathway toward scalable low-bit embodied intelligence under strict compute, memory, and power constraints.
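Component (2) of the abstract, attention temperature matching, applies a lightweight per-head scaling to stabilize attention logits and folds it into the dequantization scales at inference. A minimal numpy sketch of one plausible realization, assuming the per-head temperature is chosen to match the spread (standard deviation) of quantized logits to their full-precision counterparts; the matching criterion and names here are illustrative assumptions:

```python
import numpy as np

def match_attention_temperature(logits_fp, logits_q, eps=1e-8):
    """Per-head temperature tau that rescales quantized attention logits so
    their spread matches the full-precision logits (one scalar per head).
    logits_*: arrays of shape (heads, tokens, tokens)."""
    std_fp = logits_fp.std(axis=(1, 2))          # (heads,)
    std_q = logits_q.std(axis=(1, 2)) + eps
    return std_fp / std_q

# Toy usage: two heads whose quantized logits have the wrong spread.
rng = np.random.default_rng(1)
logits_fp = rng.normal(0.0, 2.0, size=(2, 5, 5))
logits_q = rng.normal(0.0, 1.0, size=(2, 5, 5))
tau = match_attention_temperature(logits_fp, logits_q)

# Applying tau per head restores the full-precision logit spread. Since the
# logit is a product of (dequantized) query and key, the same effect can be
# folded offline into each head's query dequantization scale, adding no
# extra operator at inference time.
rescaled = logits_q * tau[:, None, None]
```

The folding argument is what keeps the operator schedule unchanged: a multiplicative per-head correction commutes with dequantization, so it becomes a calibration-time rescaling of existing scale parameters rather than a new runtime step.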
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action Models
Post-Training Quantization
Compute Efficiency
Memory Constraints
Embodied Intelligence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-Training Quantization
Vision-Language-Action Models
Diffusion Transformer
Scale Calibration
Low-Bit Inference