🤖 AI Summary
To address the poor cross-modal alignment and low efficiency of conventional speculative decoding in vision-language models (VLMs), this paper proposes DREAM, a multimodal speculative decoding framework tailored for VLMs. The method introduces three key innovations: (1) a cross-attention feature injection mechanism that enables fine-grained alignment between visual and linguistic representations; (2) an adaptive intermediate-feature selection strategy based on attention entropy that dynamically identifies high-information draft layers; and (3) a lightweight visual token compression module that reduces draft generation overhead. The framework requires no architectural modifications to the base VLM and is compatible with mainstream models, including LLaVA, Pixtral, SmolVLM, and Gemma 3. Empirical evaluation demonstrates up to a 3.6× throughput improvement and a 2.1× increase in average accepted draft length, significantly accelerating autoregressive inference while preserving output quality.
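As a rough illustration of innovation (2), entropy-based layer scoring: a layer whose attention is spread broadly across tokens carries more distributional information than one collapsed onto a single position. The sketch below scores layers by the mean Shannon entropy of their attention rows and keeps the top-k; this is a generic heuristic under our own assumptions, not the paper's actual selection criterion, and the function names are hypothetical.

```python
import numpy as np

def attention_entropy(attn):
    """Mean Shannon entropy of attention rows.
    attn: (heads, queries, keys) probabilities, each row summing to 1."""
    eps = 1e-12  # avoid log(0)
    ent = -(attn * np.log(attn + eps)).sum(axis=-1)  # (heads, queries)
    return float(ent.mean())

def select_layers(per_layer_attn, k=2):
    """Keep the k layers whose attention distributions have the highest
    entropy (broadest, most 'informative' attention), in layer order."""
    scores = [attention_entropy(a) for a in per_layer_attn]
    order = np.argsort(scores)[::-1]  # descending by entropy
    return sorted(order[:k].tolist())
```

A near-uniform attention row (entropy ≈ log K) will outrank a sharply peaked one, so layers attending broadly across the visual and text tokens are preferred as draft inputs.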
📝 Abstract
Speculative decoding (SD) has emerged as a powerful method for accelerating autoregressive generation in large language models (LLMs), yet its integration into vision-language models (VLMs) remains underexplored. We introduce DREAM, a novel speculative decoding framework tailored for VLMs that combines three key innovations: (1) a cross-attention-based mechanism that injects intermediate features from the target model into the draft model for improved alignment, (2) adaptive intermediate-feature selection based on attention entropy to guide efficient draft-model training, and (3) visual token compression to reduce draft-model latency. DREAM enables efficient, accurate, and parallel multimodal decoding with significant throughput improvement. Experiments across a diverse set of recent popular VLMs, including LLaVA, Pixtral, SmolVLM, and Gemma3, demonstrate up to a 3.6× speedup over conventional decoding, with DREAM significantly outperforming prior SD baselines in both inference throughput and speculative draft acceptance length across a broad range of multimodal benchmarks. The code is publicly available at: https://github.com/SAI-Lab-NYU/DREAM.git
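For context on the "draft acceptance length" metric, the greedy verification step common to speculative decoding can be sketched as follows. This is a generic illustration of SD verification, not DREAM's specific rule: the draft model proposes several tokens, the target model checks them in one parallel pass, and the accepted prefix length determines the speedup.

```python
def verify_draft(draft_tokens, target_tokens):
    """Greedy speculative verification.
    Walk the draft left to right, accepting each token that matches the
    target model's argmax prediction at that position. At the first
    mismatch, substitute the target's token and stop, so every round
    still emits at least one target-approved token. The count of
    matching tokens is the 'accepted draft length'."""
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        accepted.append(d if d == t else t)
        if d != t:
            break
    return accepted
```

The longer the accepted prefix, the fewer sequential target-model forward passes are needed per generated token, which is why improving draft/target alignment directly translates into throughput gains.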