🤖 AI Summary
This work addresses the challenge of precise control in conditional discrete generative models when they are confronted with unseen combinations of conditions. The authors propose a theory-driven, composable discrete generation framework that integrates parallel token prediction with an absorbing diffusion mechanism and a concept-weighted conditional fusion strategy. This approach enables accurate modeling of an arbitrary number and combination of conditions while unifying mask-based generation within the same paradigm. Leveraging the compositional vocabularies learned by VQ-VAE/VQ-GAN, the method achieves a 63.4% average relative reduction in error rate, a 9.58-point improvement in FID, and 2.3–12× faster inference across three datasets. Furthermore, it extends readily to pretrained text-to-image models, enabling fine-grained controllable generation.
📝 Abstract
Conditional discrete generative models struggle to faithfully compose multiple input conditions. To address this, we derive a theoretically grounded formulation for composing discrete probabilistic generative processes, with masked generation (absorbing diffusion) as a special case. Our formulation enables precise specification of novel combinations and numbers of input conditions that lie outside the training data, with concept weighting enabling emphasis or negation of individual conditions. In synergy with the richly compositional learned vocabularies of VQ-VAE and VQ-GAN, our method attains a $63.4\%$ relative reduction in error rate compared to the previous state-of-the-art, averaged across 3 datasets (positional CLEVR, relational CLEVR and FFHQ), while simultaneously obtaining an average absolute FID improvement of $-9.58$. Meanwhile, our method offers a $2.3\times$ to $12\times$ real-time speed-up over comparable methods, and is readily applied to an open pre-trained discrete text-to-image model for fine-grained control of text-to-image generation.
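To make the idea of concept-weighted composition concrete, the sketch below combines per-condition token distributions in log space, product-of-experts style, so that positive weights emphasize a condition and negative weights negate it. This is a minimal illustration of the general recipe, not the paper's exact formulation; the function and parameter names (`compose_conditionals`, `uncond_logits`, `cond_logits`, `weights`) are hypothetical.

```python
import numpy as np

def compose_conditionals(uncond_logits, cond_logits, weights):
    """Combine an unconditional token distribution with several
    conditional ones in log space (product-of-experts style).

    uncond_logits: (V,) logits over the discrete token vocabulary.
    cond_logits:   list of (V,) logits, one per condition.
    weights:       list of floats; >0 emphasizes, <0 negates a condition.

    Returns a normalized probability vector over the vocabulary.
    """
    combined = uncond_logits + sum(
        w * (c - uncond_logits) for c, w in zip(cond_logits, weights)
    )
    # Numerically stable softmax back to a proper distribution.
    z = combined - combined.max()
    p = np.exp(z)
    return p / p.sum()
```

With all weights set to zero this reduces to the unconditional distribution; in a masked-generation loop, a vector like this would be sampled independently at every masked position to realize parallel token prediction.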