🤖 AI Summary
In visual autoregressive (AR) models, low-confidence candidate tokens—arising from ambiguous token selection—lead to dense candidate sets and low acceptance rates in speculative decoding, severely limiting inference acceleration. To address this, we propose a synergistic framework comprising static tree-based draft generation and a relaxed acceptance mechanism: a fixed-depth static tree replaces dynamic trees to mitigate token ambiguity under low confidence; a probability-threshold-based relaxed acceptance strategy is introduced to improve acceptance rates for deeper tokens. Our method enables, for the first time, efficient speculative decoding over deep sequences in visual AR models while preserving image reconstruction fidelity. Evaluated on mainstream visual AR models, it achieves up to 2.56× end-to-end inference speedup with zero degradation in image quality, significantly outperforming existing speculative decoding approaches.
📝 Abstract
Speculative decoding has been widely used to accelerate autoregressive (AR) text generation. However, its effectiveness in visual AR models remains limited due to token selection ambiguity, where multiple tokens receive similarly low probabilities, reducing acceptance rates. While dynamic tree drafting has been proposed to improve speculative decoding, we show that it fails to mitigate token selection ambiguity, resulting in shallow draft trees and suboptimal acceleration. To address this, we introduce LANTERN++, a novel framework that integrates static tree drafting with a relaxed acceptance condition, allowing drafts to be selected independently of low-confidence predictions. This enables deeper accepted sequences, improving decoding efficiency while preserving image quality. Extensive experiments on state-of-the-art visual AR models demonstrate that LANTERN++ significantly accelerates inference, achieving up to $\mathbf{\times 2.56}$ speedup over standard AR decoding while maintaining high image quality.
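To make the relaxed acceptance idea concrete, here is a minimal sketch in Python. It contrasts the standard speculative acceptance rule (accept a drafted token with probability $\min(1, p(x)/q(x))$, where $p$ and $q$ are the target and draft distributions) with a threshold-based relaxation that accepts any token whose target probability clears a floor `delta`. The specific rule and the `delta` parameter are illustrative assumptions, not LANTERN++'s published criterion:

```python
import random

def relaxed_accept(p, q, token, delta=0.05, rng=random):
    """Decide whether to accept a drafted `token`.

    p, q: target- and draft-model probability vectors (lists of floats).
    Standard speculative decoding accepts with probability
    min(1, p[token] / q[token]).  The relaxed rule below additionally
    accepts outright whenever p[token] >= delta, so moderately probable
    tokens survive even when many candidates share similarly low
    confidence (token selection ambiguity).

    NOTE: illustrative sketch only; `delta` and the exact acceptance
    condition are assumptions, not the paper's published rule.
    """
    if p[token] >= delta:                 # relaxed path: plausible enough
        return True
    # fallback: standard stochastic acceptance test
    ratio = p[token] / max(q[token], 1e-12)
    return rng.random() < min(1.0, ratio)
```

Under token selection ambiguity, many drafted tokens have small but non-negligible target probability; the threshold path lets those tokens pass deterministically, which is what allows deeper draft sequences to be accepted.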