🤖 AI Summary
Visual autoregressive models (VARs) suffer from a severe lack of generation diversity—producing highly repetitive images even for simple text prompts—yet existing work predominantly prioritizes image fidelity while overlooking this limitation. This paper introduces DiverseVAR, a plug-and-play inference-time framework that significantly enhances the diversity of text-conditioned VAR generation without altering the model architecture, retraining, or fine-tuning. Its core contributions are threefold: (1) text-embedding noise injection to enrich semantic-level sampling diversity; (2) a scale-travel latent refinement strategy, inspired by the "time-travel" mechanism of diffusion models, that counteracts the quality loss from noise injection by resuming generation at intermediate scales; and (3) the use of a multi-scale autoencoder to extract the coarse-scale tokens needed to restart generation from those intermediate stages. Extensive experiments demonstrate that DiverseVAR establishes a new Pareto frontier between diversity and quality across multiple benchmarks.
📝 Abstract
We introduce DiverseVAR, a framework that enhances the diversity of text-conditioned visual autoregressive models (VAR) at test time without requiring retraining, fine-tuning, or substantial computational overhead. While VAR models have recently emerged as strong competitors to diffusion and flow models for image generation, they suffer from a critical limitation in diversity, often producing nearly identical images even for simple prompts. This issue has largely gone unnoticed amid the predominant focus on image quality. We address this limitation at test time in two stages. First, inspired by diversity enhancement techniques in diffusion models, we propose injecting noise into the text embedding. This introduces a trade-off between diversity and image quality: as diversity increases, the image quality sharply declines. To preserve quality, we propose scale-travel: a novel latent refinement technique inspired by time-travel strategies in diffusion models. Specifically, we use a multi-scale autoencoder to extract coarse-scale tokens that enable us to resume generation at intermediate stages. Extensive experiments show that combining text-embedding noise injection with our scale-travel refinement significantly enhances diversity while minimizing image-quality degradation, achieving a new Pareto frontier in the diversity-quality trade-off.
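The two-stage procedure described above can be sketched in a few lines of pseudocode. This is a minimal illustrative sketch, not the paper's implementation: the function names (`inject_text_noise`, `scale_travel`), the Gaussian noise model, and the encoder/generator callables are all assumptions made for exposition.

```python
import numpy as np

def inject_text_noise(text_emb, sigma=0.1, rng=None):
    """Stage 1 (hypothetical): perturb the text embedding with Gaussian
    noise to diversify the conditioning signal. The paper's exact noise
    distribution and scale are not specified here."""
    rng = rng or np.random.default_rng()
    return text_emb + sigma * rng.standard_normal(text_emb.shape)

def scale_travel(image, encode_to_scale, generate_from_scale,
                 travel_scale, num_scales):
    """Stage 2 (hypothetical): re-encode a generated image with a
    multi-scale autoencoder to obtain coarse-scale tokens, then resume
    VAR generation from that intermediate scale — analogous to
    time-travel refinement in diffusion models."""
    # Coarse tokens up to `travel_scale`, via the multi-scale autoencoder.
    coarse_tokens = encode_to_scale(image, travel_scale)
    # Regenerate the remaining finer scales from the intermediate state.
    return generate_from_scale(coarse_tokens, travel_scale, num_scales)
```

The key design point is that both steps run purely at inference time: noise injection touches only the conditioning input, and scale-travel reuses the frozen generator, so no weights are retrained.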