🤖 AI Summary
To address the underutilization of early layers and the optimization instability that arise in large-scale GAN training, this paper proposes GAT, a GAN paradigm that pairs a compact VAE-derived latent space with a pure-Transformer architecture for both the generator and the discriminator. Methodologically, we introduce a lightweight intermediate supervision mechanism and a width-aware learning-rate scheduling strategy, enabling stable end-to-end training across model scales from S to XL. On ImageNet-256, GAT-XL/2 achieves a state-of-the-art FID of 2.96 in only 40 epochs, the best single-step generation performance reported to date, while improving training efficiency by 6× over strong baselines. Our core contributions are threefold: (i) the first unified modeling of pure-Transformer GANs with VAE-based latent-space compression; (ii) a structure-aware optimization strategy that overcomes scalability bottlenecks; and (iii) significant gains in both generation quality and training scalability.
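The intermediate supervision mechanism can be pictured as an auxiliary loss on early generator blocks. The exact formulation is not given in this summary, so the sketch below is a hypothetical minimal version: it assumes the auxiliary term is a down-weighted average of per-block losses added to the main adversarial loss, nudging early layers to contribute instead of being underutilized.

```python
# Hypothetical sketch of lightweight intermediate supervision.
# Assumptions (not from the paper text): the auxiliary term is the mean of
# losses computed on intermediate (early-block) outputs, scaled by a small
# weight `aux_weight` and added to the final-layer adversarial loss.

def total_generator_loss(main_loss: float,
                         intermediate_losses: list[float],
                         aux_weight: float = 0.1) -> float:
    """Combine the final adversarial loss with down-weighted losses on
    intermediate generator outputs, so early layers receive direct signal."""
    if not intermediate_losses:
        return main_loss
    aux = sum(intermediate_losses) / len(intermediate_losses)
    return main_loss + aux_weight * aux

# Example: main loss from the last block, two auxiliary losses from early blocks.
print(total_generator_loss(1.0, [0.8, 0.6]))
```

Keeping `aux_weight` small keeps the mechanism lightweight: the main adversarial objective still dominates, and the auxiliary heads can be dropped at inference time.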
📝 Abstract
Scalability has driven recent advances in generative modeling, yet its principles remain underexplored for adversarial learning. We investigate the scalability of Generative Adversarial Networks (GANs) through two design choices that have proven effective in other families of generative models: training in a compact Variational Autoencoder (VAE) latent space and adopting purely transformer-based generators and discriminators. Training in latent space enables efficient computation while preserving perceptual fidelity, and this efficiency pairs naturally with plain transformers, whose performance scales with computational budget. Building on these choices, we analyze the failure modes that emerge when GANs are scaled naively: underutilization of early layers in the generator and optimization instability as the network grows. We address both with simple, scale-friendly solutions: lightweight intermediate supervision and width-aware learning-rate adjustment. Our experiments show that GAT, a purely transformer-based, latent-space GAN, can be trained reliably across a wide range of capacities (S through XL). Moreover, GAT-XL/2 achieves state-of-the-art single-step, class-conditional generation performance (FID of 2.96) on ImageNet-256 in just 40 epochs, 6× fewer than strong baselines.
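The width-aware learning-rate adjustment can be illustrated with a small sketch. The abstract does not specify the rule, so the version below assumes a muP-style scheme in which the learning rate tuned at a small base width is scaled down inversely with hidden width; the widths and base values are illustrative, not taken from the paper.

```python
# Hypothetical sketch of width-aware learning-rate adjustment.
# Assumptions (not from the paper text): a muP-style 1/width rule, with
# BASE_WIDTH and BASE_LR as illustrative values tuned on the smallest model.

BASE_WIDTH = 384   # assumed hidden width of the S-scale model
BASE_LR = 1e-4     # assumed learning rate tuned at that width

def width_aware_lr(width: int,
                   base_width: int = BASE_WIDTH,
                   base_lr: float = BASE_LR) -> float:
    """Shrink the learning rate proportionally as hidden width grows,
    keeping per-coordinate update sizes comparable across model scales."""
    return base_lr * (base_width / width)

# Wider variants get proportionally smaller learning rates:
for name, width in [("S", 384), ("B", 768), ("XL", 1152)]:
    print(f"GAT-{name}: width={width}, lr={width_aware_lr(width):.2e}")
```

The appeal of such a rule for scaling studies is that one hyperparameter sweep at the base width transfers across the S-to-XL range, rather than retuning the learning rate for every capacity.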