🤖 AI Summary
This work addresses the limited performance gains of continuous diffusion models under inference-time scaling. We adapt efficient search strategies from large language models to image generation by introducing an autoregressive framework based on discrete visual tokens. Our method integrates beam search, early pruning, and computation reuse, exploiting the structural advantages that discrete sequences offer for inference optimization. Systematic ablation studies and verifier-based evaluations show that architectural design choices matter far more for inference-time optimization than parameter count alone. In experiments, a 2B-parameter visual autoregressive model enhanced with beam search outperforms a 12B-parameter diffusion model across multiple text-to-image benchmarks, achieving both higher generation quality and better computational efficiency. These results demonstrate the feasibility and promise of inference-time search strategies for image synthesis.
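The search procedure described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `next_token_logprobs` stands in for the autoregressive model's next-token distribution, `verifier_score` for the external verifier, and the pruning margin is an assumed hyperparameter. The key point it illustrates is that discrete tokens let partial sequences be ranked and pruned early, before full images are generated.

```python
# Sketch of beam search over discrete visual tokens with early pruning.
# `next_token_logprobs` and `verifier_score` are hypothetical stand-ins
# for the model's next-token distribution and the external verifier.
import math
from typing import Callable, List, Tuple

Seq = Tuple[int, ...]

def beam_search(
    next_token_logprobs: Callable[[Seq], List[Tuple[int, float]]],
    verifier_score: Callable[[Seq], float],
    seq_len: int,
    beam_width: int = 4,
    prune_margin: float = 5.0,
) -> Seq:
    """Expand up to `beam_width` partial token sequences step by step,
    discarding candidates whose cumulative log-prob trails the current
    best by more than `prune_margin` (early pruning)."""
    beams: List[Tuple[Seq, float]] = [((), 0.0)]
    for _ in range(seq_len):
        candidates: List[Tuple[Seq, float]] = []
        for seq, logp in beams:
            # Computation reuse: the prefix `seq` is extended in place,
            # so its cached activations could be shared across children.
            for tok, tok_logp in next_token_logprobs(seq):
                candidates.append((seq + (tok,), logp + tok_logp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        best = candidates[0][1]
        # Early pruning: drop partial sequences already far behind.
        beams = [c for c in candidates if best - c[1] <= prune_margin]
        beams = beams[:beam_width]
    # Final selection uses the verifier, not raw model likelihood.
    return max(beams, key=lambda c: verifier_score(c[0]))[0]

if __name__ == "__main__":
    # Toy demo: a two-token vocabulary where token 1 is slightly more
    # likely, and a verifier that rewards sequences with more 1s.
    vocab = [(0, math.log(0.4)), (1, math.log(0.6))]
    best = beam_search(lambda s: vocab, lambda s: sum(s), seq_len=3)
    print(best)  # (1, 1, 1)
```

In a real system the verifier would be a scoring model (e.g. an image-text alignment scorer run on decoded tokens), and the abstract's speed/reasoning trade-off corresponds to how expensive that verifier call is relative to the generator.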
📝 Abstract
While inference-time scaling through search has revolutionized Large Language Models, translating these gains to image generation has proven difficult. Recent attempts to apply search strategies to continuous diffusion models show limited benefits, with simple random sampling often performing best. We demonstrate that the discrete, sequential nature of visual autoregressive models enables effective search for image generation. We show that beam search substantially improves text-to-image generation, enabling a 2B parameter autoregressive model to outperform a 12B parameter diffusion model across benchmarks. Systematic ablations show that this advantage comes from the discrete token space, which allows early pruning and computational reuse, and our verifier analysis highlights trade-offs between speed and reasoning capability. These findings suggest that model architecture, not just scale, is critical for inference-time optimization in visual generation.