🤖 AI Summary
Autoregressive vision generation models struggle to capture global image structure due to their reliance solely on local next-token supervision, resulting in limited generation quality and slow convergence. This work proposes Mirai, a framework that systematically integrates both explicit and implicit “foresight”—i.e., future token information—into autoregressive training for the first time. Mirai-E extracts multi-position future signals from unidirectional representations, while Mirai-I injects future context via bidirectional representation matching. Without increasing inference cost or modifying the model architecture, Mirai substantially enhances the model’s ability to learn two-dimensional causal structures. On class-conditional ImageNet generation, the approach reduces the FID of LlamaGen-B from 5.34 to 4.34 and accelerates convergence by up to 10×.
📝 Abstract
Autoregressive (AR) visual generators model images as sequences of discrete tokens and are trained with next-token likelihood. This strictly causal supervision optimizes each step only by its immediate next token, which diminishes global coherence and slows convergence. We ask whether foresight, training signals that originate from later tokens, can help AR visual generation. We conduct a series of controlled diagnostics along three axes: injection level, foresight layout, and foresight source, unveiling a key insight: aligning foresight to AR models' internal representations on the 2D image grid improves causality modeling. We formulate this insight with Mirai (meaning "future" in Japanese), a general framework that injects future information into AR training with no architecture change and no extra inference overhead: Mirai-E uses explicit foresight from multiple future positions of unidirectional representations, whereas Mirai-I leverages implicit foresight from matched bidirectional representations. Extensive experiments show that Mirai significantly accelerates convergence and improves generation quality. For instance, Mirai speeds up LlamaGen-B's convergence by up to 10× and reduces the generation FID from 5.34 to 4.34 on the class-conditional ImageNet generation benchmark. Our study highlights that visual autoregressive models need foresight.
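The explicit-foresight idea (Mirai-E) can be illustrated as a next-token cross-entropy loss augmented with auxiliary terms that predict tokens at several future offsets from each position. The sketch below is a hypothetical illustration under assumed conventions, not the paper's implementation: the per-offset head layout, the offsets, and the decaying weights are all our own assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the vocabulary dimension.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def foresight_loss(logits, tokens, offsets=(1, 2, 4), weights=(1.0, 0.5, 0.25)):
    """Weighted cross-entropy over several future offsets (hypothetical layout).

    logits:  (T, K, V) array; at position t, head k predicts token t + offsets[k].
             offsets[0] == 1 recovers the standard next-token term; the extra
             heads supply the "foresight" supervision.
    tokens:  (T,) array of integer token ids.
    Returns the weight-normalized average negative log-likelihood.
    """
    T = tokens.shape[0]
    probs = softmax(logits)
    total, norm = 0.0, 0.0
    for k, (off, w) in enumerate(zip(offsets, weights)):
        for t in range(T - off):
            total += -w * np.log(probs[t, k, tokens[t + off]])
            norm += w
    return total / norm
```

In a real training loop the auxiliary heads would be dropped at inference time, which is consistent with the abstract's claim of no extra inference overhead.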