🤖 AI Summary
This work addresses the significant efficiency bottleneck in the late-stage generation of visual autoregressive (VAR) models. The authors propose a three-dimensional sparsity modeling approach—spanning tokens, layers, and scales—based on attention entropy to identify and exploit fine-grained semantic sparsity patterns. Unlike conventional heuristic skipping strategies, this method dynamically leverages multi-dimensional sparsity for accelerated inference without compromising generation quality. Evaluated on Infinity-2B and Infinity-8B models, the approach achieves up to 3.4× speedup while preserving high-fidelity semantic details, substantially outperforming existing techniques.
📝 Abstract
Visual Autoregressive (VAR) models enhance generation quality but face a critical efficiency bottleneck in their later generation stages. In this paper, we present ToProVAR, a novel optimization framework for VAR models that fundamentally differs from prior approaches such as FastVAR and SkipVAR. Instead of relying on heuristic skipping strategies, our method leverages attention entropy to characterize semantic projections across different dimensions of the model architecture. This enables precise identification of parameter dynamics under varying token granularities, semantic scopes, and generation scales. Building on this analysis, we uncover sparsity patterns along three critical dimensions (token, layer, and scale) and propose a set of fine-grained optimization strategies tailored to these patterns. Extensive evaluation demonstrates that our approach aggressively accelerates generation while preserving semantic fidelity and fine details, outperforming traditional methods in both efficiency and quality. Experiments on Infinity-2B and Infinity-8B show that ToProVAR achieves up to 3.4× acceleration with minimal quality loss, effectively mitigating the issues found in prior work. Our code will be made publicly available.
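To make the core idea concrete, the sketch below illustrates how attention entropy can flag sparsity candidates: queries whose attention distribution is sharply concentrated (low Shannon entropy) attend to little context and are plausible targets for skipping or pruning. This is a minimal NumPy illustration of the general entropy criterion, not the paper's actual implementation; the threshold choice here is a hypothetical example.

```python
import numpy as np

def attention_entropy(attn, eps=1e-12):
    """Shannon entropy of each row of an attention map.

    attn: array of shape (..., num_queries, num_keys), rows sum to 1.
    Returns entropy per query token, shape (..., num_queries).
    """
    return -np.sum(attn * np.log(attn + eps), axis=-1)

# Toy example: a softmax attention map for 4 query tokens over 4 keys.
logits = np.array([
    [8.0, 0.0, 0.0, 0.0],   # sharply peaked -> low entropy
    [1.0, 1.0, 1.0, 1.0],   # uniform -> maximal entropy (log 4)
    [3.0, 2.0, 0.0, 0.0],
    [0.5, 0.5, 0.5, 4.0],
])
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

ent = attention_entropy(attn)
# Queries with concentrated attention (low entropy) become candidates
# for sparsification at late generation scales; the 0.5 * log(K)
# cutoff below is a hypothetical choice for illustration only.
threshold = 0.5 * np.log(attn.shape[-1])
sparse_mask = ent < threshold
```

In a real VAR model the same statistic would be computed per token, per layer, and per scale, which is exactly the three-dimensional view the abstract describes.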