🤖 AI Summary
This work addresses a limitation of conventional latent diffusion models, which require separate training stages for the tokenizer and the diffusion model, thereby hindering joint optimization. The authors propose UNITE, a unified architecture that leverages a weight-shared Generative Encoder to cast both image tokenization and latent generation as conditionally instantiated variants of a single latent inference problem, enabling end-to-end training in one stage. Notably, UNITE dispenses with adversarial losses and pretrained encoders, instead establishing a "common latent language" to co-optimize the representation space. Evaluated on ImageNet at 256×256 resolution, UNITE achieves FID scores of 2.12 (Base) and 1.73 (Large), demonstrating performance competitive with state-of-the-art methods across both image and molecular modalities.
📝 Abstract
Latent diffusion models (LDMs) enable high-fidelity synthesis by operating in learned latent spaces. However, training state-of-the-art LDMs requires complex staging: a tokenizer must be trained first before the diffusion model can be trained in its frozen latent space. We propose UNITE, an autoencoder architecture for unified tokenization and latent diffusion. UNITE consists of a Generative Encoder that serves as both image tokenizer and latent generator via weight sharing. Our key insight is that tokenization and generation can be viewed as the same latent inference problem under different conditioning regimes: tokenization infers latents from fully observed images, whereas generation infers them from noise together with text or class conditioning. Motivated by this, we introduce a single-stage training procedure that jointly optimizes both tasks via two forward passes through the same Generative Encoder. The shared parameters enable gradients to jointly shape the latent space, encouraging a "common latent language". Across image and molecule modalities, UNITE achieves near-state-of-the-art performance without adversarial losses or pretrained encoders (e.g., DINO), reaching FID 2.12 and 1.73 for Base and Large models on ImageNet 256×256. We further analyze the Generative Encoder through the lenses of representation alignment and compression. These results show that single-stage joint training of tokenization and generation from scratch is feasible.
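To make the "two forward passes through one weight-shared Generative Encoder" idea concrete, here is a minimal, hypothetical PyTorch sketch. All module and loss names (`GenerativeEncoder`, `Decoder`, `joint_training_step`) and the specific loss terms are illustrative assumptions for exposition, not the paper's actual architecture or objectives.

```python
# Hypothetical sketch: one encoder used for both tokenization (observed image,
# no conditioning) and generation (noise + class conditioning), trained jointly
# in a single stage. Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GenerativeEncoder(nn.Module):
    """Single network shared between the tokenization and generation passes."""

    def __init__(self, in_dim=3 * 32 * 32, latent_dim=256, num_classes=1000):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, in_dim)
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.SiLU(),
            nn.Linear(1024, latent_dim),
        )

    def forward(self, x, class_ids=None):
        # Tokenization pass: x is an observed image, class_ids is None.
        # Generation pass: x is noise and class_ids supplies the conditioning.
        h = x.flatten(1)
        if class_ids is not None:
            h = h + self.class_embed(class_ids)
        return self.net(h)


class Decoder(nn.Module):
    """Maps latents back to pixel space for a reconstruction objective."""

    def __init__(self, latent_dim=256, out_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.SiLU(),
            nn.Linear(1024, out_dim),
        )

    def forward(self, z):
        return self.net(z)


def joint_training_step(encoder, decoder, images, class_ids, optimizer):
    """One single-stage step: both passes share encoder weights, so gradients
    from both objectives shape the same latent space."""
    optimizer.zero_grad()

    # Pass 1 (tokenization): infer latents from fully observed images.
    z_tok = encoder(images)
    recon_loss = F.mse_loss(decoder(z_tok), images.flatten(1))

    # Pass 2 (generation): infer latents from noise plus class conditioning.
    # A simple regression toward the tokenizer latents stands in here for the
    # paper's actual generative (diffusion) loss.
    noise = torch.randn_like(images)
    z_gen = encoder(noise, class_ids=class_ids)
    gen_loss = F.mse_loss(z_gen, z_tok.detach())

    loss = recon_loss + gen_loss
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    enc, dec = GenerativeEncoder(), Decoder()
    opt = torch.optim.AdamW(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
    imgs = torch.rand(8, 3, 32, 32)
    labels = torch.randint(0, 1000, (8,))
    print(joint_training_step(enc, dec, imgs, labels, opt))
```

The key design point the sketch illustrates is weight sharing: because both passes call the same `encoder`, the reconstruction and generation gradients update a single parameter set, which is what allows the latent space to be co-optimized in one training stage.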