SP-VLA: A Joint Model Scheduling and Token Pruning Approach for VLA Model Acceleration

📅 2025-06-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Vision-Language-Action (VLA) models suffer from poor real-time performance due to high computational overhead and low inference frequency; existing acceleration methods overlook inherent temporal redundancy (redundant action generation in sequential decision-making) and spatial redundancy (redundant visual input). Method: We propose a spatio-temporal joint sparsification framework: (i) a novel action-type-driven dynamic co-scheduling mechanism between the VLA model and a lightweight generator; and (ii) a vision-semantic dual-aware token importance assessment and pruning method, enabling the first joint temporal and spatial sparsification of VLA models. Key techniques include action-aware scheduling, lightweight generator distillation, dual-dimensional token classification, and dynamic pruning. Results: Experiments across multiple tasks achieve up to 1.5× speedup with <3% accuracy degradation, significantly outperforming state-of-the-art VLA acceleration approaches.
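The co-scheduling idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the actual deliberative/intuitive criterion is not specified here, so we assume a simple heuristic (a sharp turn relative to the previous action marks a key decision point), and `vla_model` / `light_generator` are stand-in callables.

```python
import math

def is_deliberative(prev_action, candidate, angle_thresh_deg=20.0):
    """Flag an action as deliberative when it turns sharply from the last one.

    Assumed heuristic, not the paper's actual criterion.
    """
    dot = sum(p * c for p, c in zip(prev_action, candidate))
    norm = (math.sqrt(sum(p * p for p in prev_action))
            * math.sqrt(sum(c * c for c in candidate)))
    if norm == 0.0:
        return True  # no usable history: be conservative, run the full model
    cos = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos)) > angle_thresh_deg

def schedule_step(obs, prev_action, vla_model, light_generator):
    """Run the cheap generator first; escalate to the VLA model if needed."""
    proposal = light_generator(obs, prev_action)
    if is_deliberative(prev_action, proposal):
        return vla_model(obs)   # slow, accurate path at key decision points
    return proposal             # fast, intuitive path for routine actions
```

Under this sketch, most steps are served by the lightweight generator, and the expensive VLA forward pass only runs when the proposed action deviates sharply, which is the source of the temporal-redundancy savings.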

๐Ÿ“ Abstract
Vision-Language-Action (VLA) models have attracted increasing attention for their strong control capabilities. However, their high computational cost and low execution frequency hinder their suitability for real-time tasks such as robotic manipulation and autonomous navigation. Existing VLA acceleration methods primarily focus on structural optimization, overlooking the fact that these models operate in sequential decision-making environments. As a result, temporal redundancy in sequential action generation and spatial redundancy in visual input remain unaddressed. To this end, we propose SP-VLA, a unified framework that accelerates VLA models by jointly scheduling models and pruning tokens. Specifically, we design an action-aware model scheduling mechanism that reduces temporal redundancy by dynamically switching between the VLA model and a lightweight generator. Inspired by the human motion pattern of focusing on key decision points while relying on intuition for other actions, we categorize VLA actions into deliberative and intuitive, assigning the former to the VLA model and the latter to the lightweight generator, enabling frequency-adaptive execution through collaborative model scheduling. To address spatial redundancy, we further develop a spatio-semantic dual-aware token pruning method. Tokens are classified into spatial and semantic types and pruned based on their dual-aware importance to accelerate VLA inference. These two mechanisms work jointly to guide the VLA in focusing on critical actions and salient visual information, achieving effective acceleration while maintaining high accuracy. Experimental results demonstrate that our method achieves up to 1.5× acceleration with less than 3% drop in accuracy, outperforming existing approaches in multiple tasks.
Problem

Research questions and friction points this paper is trying to address.

Reduces computational cost of VLA models for real-time tasks
Addresses temporal redundancy in sequential action generation
Mitigates spatial redundancy in visual input processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Action-aware model scheduling reduces temporal redundancy
Spatio-semantic token pruning minimizes spatial redundancy
Dual-aware token classification accelerates VLA inference
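The dual-aware pruning step described above can be sketched as follows. This is a minimal illustration under assumptions: the paper's exact scoring functions are not reproduced, so we assume each visual token already carries a semantic score (e.g. a text-attention weight) and a spatial score (e.g. a local-saliency measure), and keep the top-k tokens along each dimension.

```python
def dual_aware_prune(tokens, semantic_scores, spatial_scores,
                     keep_semantic=2, keep_spatial=2):
    """Return the indices of kept tokens: top-k per importance dimension.

    Hypothetical sketch; score definitions and k values are assumptions.
    """
    by_sem = sorted(range(len(tokens)),
                    key=lambda i: semantic_scores[i], reverse=True)
    by_spa = sorted(range(len(tokens)),
                    key=lambda i: spatial_scores[i], reverse=True)
    # A token survives if it is important in either dimension.
    kept = set(by_sem[:keep_semantic]) | set(by_spa[:keep_spatial])
    return sorted(kept)  # preserve the original token order for the model
```

Because a token is kept when it ranks highly in either dimension, semantically salient regions (e.g. the object named in the instruction) and spatially distinctive regions both survive, while uniformly unimportant tokens are dropped before the expensive transformer layers.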
Authors

Ye Li, Tsinghua University
Yuan Meng, Tsinghua University
Zewen Sun, Tsinghua University
Kangye Ji, Tsinghua University
Chen Tang, The Chinese University of Hong Kong
Jiajun Fan, University of Illinois Urbana-Champaign (CS Ph.D.; reinforcement learning, machine learning)
Xinzhu Ma, Beihang University (Associate Professor; deep learning, computer vision, 3D scene understanding, AI4Science)
Shutao Xia, Tsinghua University
Zhi Wang, Tsinghua University
Wenwu Zhu, Tsinghua University (Professor, Computer Science; multimedia computing, network representation learning)