FastSTAR: Spatiotemporal Token Pruning for Efficient Autoregressive Video Synthesis

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the severe computational bottleneck in STAR-based video generation caused by "token explosion" at high resolutions and long sequence lengths. The authors propose FastSTAR, a training-free acceleration framework that introduces a dual-dimensional token importance evaluation mechanism integrating spatial structural convergence and temporal motion trajectories. By performing spatiotemporal token pruning to identify critical regions and employing a local update strategy that refines only non-converged areas while skipping redundant computations, the method achieves significant efficiency gains. Evaluated on InfinityStar, FastSTAR attains up to a 2.01× speedup with a PSNR of 28.29 and less than 1% performance degradation, substantially improving generation efficiency while preserving output quality.

📝 Abstract
Visual Autoregressive modeling (VAR) has emerged as a highly efficient alternative to diffusion-based frameworks, achieving comparable synthesis quality. However, as this paradigm extends to Spacetime Autoregressive modeling (STAR) for video generation, scaling resolution and frame counts leads to a "token explosion" that creates a massive computational bottleneck in the final refinement stages. To address this, we propose FastSTAR, a training-free acceleration framework designed for high-quality video generation. Our core method, Spatiotemporal Token Pruning, identifies essential tokens by integrating two specialized terms: (1) Spatial similarity, which evaluates structural convergence across hierarchical scales to skip computations in regions where further refinement becomes redundant, and (2) Temporal similarity, which identifies active motion trajectories by assessing feature-level variations relative to the preceding clip. Combined with a Partial Update mechanism, FastSTAR ensures that only non-converged regions are refined, maintaining fluid motion while bypassing redundant computations. Experimental results on InfinityStar demonstrate that FastSTAR achieves up to a 2.01x speedup with a PSNR of 28.29 and less than 1% performance degradation, demonstrating a superior efficiency-quality trade-off for STAR-based video synthesis.
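The pruning idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the thresholds, the use of cosine similarity, and all function and variable names here are assumptions made purely for illustration. It scores each token by (1) similarity to its upsampled coarser-scale feature (structural convergence) and (2) similarity to the co-located feature of the preceding clip (motion stasis), then refines only tokens that have not converged on both axes.

```python
import numpy as np

def cosine_sim(a, b, eps=1e-8):
    # Per-token cosine similarity along the channel axis.
    num = (a * b).sum(-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps
    return num / den

def spatiotemporal_prune(feat_cur, feat_prev_scale, feat_prev_clip,
                         tau_s=0.95, tau_t=0.9):
    """Return a boolean mask of tokens that still need refinement.

    feat_cur        : (N, C) token features at the current scale
    feat_prev_scale : (N, C) features from the previous (coarser) scale,
                      already upsampled to N tokens
    feat_prev_clip  : (N, C) co-located features from the preceding clip
    tau_s, tau_t    : similarity thresholds (hypothetical values)
    """
    spatial_sim = cosine_sim(feat_cur, feat_prev_scale)   # structural convergence
    temporal_sim = cosine_sim(feat_cur, feat_prev_clip)   # motion stasis
    converged = (spatial_sim > tau_s) & (temporal_sim > tau_t)
    return ~converged  # True = active token, kept for refinement

def partial_update(feat_cur, refine_fn, active_mask):
    # Refine only the active tokens; converged tokens are copied through,
    # skipping their computation entirely.
    out = feat_cur.copy()
    out[active_mask] = refine_fn(feat_cur[active_mask])
    return out
```

In this sketch, a token is skipped only when it agrees with both the coarser scale and the previous clip, so regions with active motion (low temporal similarity) are always refined, which is how the paper describes preserving fluid motion.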
Problem

Research questions and friction points this paper is trying to address.

token explosion
spatiotemporal autoregressive modeling
video synthesis
computational bottleneck
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatiotemporal Token Pruning
Autoregressive Video Synthesis
Training-free Acceleration
Token Explosion Mitigation
Partial Update Mechanism
Sungwoong Yune
Korea Advanced Institute of Science and Technology
Suheon Jeong
Korea Advanced Institute of Science and Technology
Joo-Young Kim
KAIST
Computer Architecture · AI Accelerator · System-on-Chip · FPGA