StepVAR: Structure-Texture Guided Pruning for Visual Autoregressive Models

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the quadratic growth in inference cost of Visual Autoregressive (VAR) models at high resolutions and the semantic degradation caused by existing pruning methods that neglect structural consistency. The authors propose StepVAR, a training-free token pruning framework that, for the first time, integrates dual structure and texture criteria to accelerate VAR models. Token importance is assessed jointly with a lightweight high-pass filter and principal component analysis, while dense feature maps are reconstructed via nearest-neighbor feature propagation to preserve multi-scale generation quality. Evaluated on state-of-the-art text-to-image and text-to-video VAR models, the method delivers substantial inference speedups and outperforms existing acceleration approaches in both quantitative metrics and qualitative results, all while maintaining high-fidelity generation.
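The dual-criterion token scoring described in the summary can be sketched as follows. This is an illustrative reconstruction, not the paper's released code: the Laplacian high-pass kernel, the number of principal components, and the fusion weight `alpha` are all assumptions chosen for clarity.

```python
import numpy as np

def token_importance(feat, keep_ratio=0.5, alpha=0.5):
    """Score tokens of an (H, W, C) feature map by fusing a high-pass
    texture term with a PCA-based structure term, then select the
    top-scoring tokens. Kernel, k, and alpha are illustrative choices."""
    H, W, C = feat.shape

    # texture term: 4-neighbour Laplacian high-pass response per token
    pad = np.pad(feat, ((1, 1), (1, 1), (0, 0)), mode="edge")
    lap = (4 * feat - pad[:-2, 1:-1] - pad[2:, 1:-1]
           - pad[1:-1, :-2] - pad[1:-1, 2:])
    texture = np.linalg.norm(lap, axis=-1)                  # (H, W)

    # structure term: energy of each token along the top principal
    # directions of the token feature matrix
    X = feat.reshape(-1, C)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = max(1, C // 8)                                      # assumed k
    structure = np.linalg.norm(Xc @ Vt[:k].T, axis=1).reshape(H, W)

    # normalise both terms to [0, 1] and fuse with weight alpha
    def norm01(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    score = alpha * norm01(texture) + (1 - alpha) * norm01(structure)

    # keep the highest-scoring tokens (flat indices into the H*W grid)
    n_keep = max(1, int(keep_ratio * H * W))
    keep_idx = np.argsort(score.ravel())[::-1][:n_keep]
    return score, keep_idx
```

Under this sketch, tokens strong in either local texture (high Laplacian response) or global structure (large projection onto the leading principal components) survive pruning, matching the paper's stated goal of preserving both fine detail and overall composition.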

📝 Abstract
Visual AutoRegressive (VAR) models based on next-scale prediction enable efficient hierarchical generation, yet the inference cost grows quadratically at high resolutions. We observe that the computationally intensive later scales predominantly refine high-frequency textures and exhibit substantial spatial redundancy, in contrast to earlier scales that determine the global structural layout. Existing pruning methods primarily focus on high-frequency detection for token selection, often overlooking structural coherence and consequently degrading global semantics. To address this limitation, we propose StepVAR, a training-free token pruning framework that accelerates VAR inference by jointly considering structural and textural importance. Specifically, we employ a lightweight high-pass filter to capture local texture details, while leveraging Principal Component Analysis (PCA) to preserve global structural information. This dual-criterion design enables the model to retain tokens critical for both fine-grained fidelity and overall composition. To maintain valid next-scale prediction under sparse tokens, we further introduce a nearest neighbor feature propagation strategy to reconstruct dense feature maps from pruned representations. Extensive experiments on state-of-the-art text-to-image and text-to-video VAR models demonstrate that StepVAR achieves substantial inference speedups while maintaining generation quality. Quantitative and qualitative evaluations consistently show that our method outperforms existing acceleration approaches, validating its effectiveness and general applicability across diverse VAR architectures.
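The nearest neighbor feature propagation step described in the abstract rebuilds a dense feature map from the surviving tokens so that next-scale prediction still receives a full grid. A minimal sketch, assuming Euclidean distance on the token grid (the paper's exact metric and tie-breaking are not specified here):

```python
import numpy as np

def nn_propagate(feat, keep_idx):
    """Reconstruct a dense (H, W, C) map from pruned tokens: every
    pruned position copies the feature of its spatially nearest kept
    token. `keep_idx` holds flat indices into the H*W grid."""
    H, W, C = feat.shape
    keep_idx = np.asarray(keep_idx)
    ys, xs = np.divmod(keep_idx, W)                 # kept-token coordinates
    gy, gx = np.mgrid[0:H, 0:W]
    # squared distance from every grid cell to every kept token
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    nearest = d2.argmin(axis=-1)                    # (H, W) -> kept-token slot
    dense = feat.reshape(-1, C)[keep_idx[nearest]]  # gather features
    return dense                                    # (H, W, C)
```

A kept token's own position trivially maps back to itself, so the reconstruction is exact on retained tokens and a nearest-neighbor fill elsewhere; a KD-tree would replace the dense distance matrix for large grids.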
Problem

Research questions and friction points this paper is trying to address.

Visual Autoregressive Models
Inference Acceleration
Token Pruning
Structural Coherence
High-Frequency Texture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-Texture Guided Pruning
Visual Autoregressive Models
Training-Free Acceleration
PCA-Based Structural Preservation
Nearest Neighbor Feature Propagation