🤖 AI Summary
Latent diffusion models face a trade-off between reconstruction fidelity and generation quality in high-dimensional latent spaces: encoders inadequately represent high-frequency information, leading to insufficient exposure to, and underfitting of, high-frequency components during diffusion training. This mismatch stems from asymmetric high-frequency response characteristics between the encoder and decoder. To address this, we propose FreqWarm, a frequency-aware curriculum learning strategy that progressively amplifies exposure to high-frequency latent signals during early-stage diffusion training, without modifying or retraining the autoencoder, enabling plug-and-play optimization. We validate our approach via frequency-domain decomposition and joint perturbation analysis in both the RGB and latent spaces. Quantitatively, FreqWarm improves generative fidelity, reducing gFID by 14.11, 6.13, and 4.42 on Wan2.2-VAE, LTX-VAE, and DC-AE-f32, respectively, and generalizes well across diverse VAE architectures.
📝 Abstract
Latent diffusion has become the default paradigm for visual generation, yet we observe a persistent reconstruction-generation trade-off as latent dimensionality increases: higher-capacity autoencoders improve reconstruction fidelity, but generation quality eventually declines. We trace this gap to a mismatch between encoder and decoder behavior at high frequencies. Through controlled perturbations in both the RGB and latent domains, we analyze encoder/decoder behavior and find that decoders depend strongly on high-frequency latent components to recover details, whereas encoders under-represent high-frequency content, yielding insufficient exposure to, and underfitting of, high-frequency bands during diffusion model training. To address this issue, we introduce FreqWarm, a plug-and-play frequency warm-up curriculum that increases early-stage exposure to high-frequency latent signals during diffusion or flow-matching training -- without modifying or retraining the autoencoder. Applied across several high-dimensional autoencoders, FreqWarm consistently improves generation quality, decreasing gFID by 14.11 on Wan2.2-VAE, 6.13 on LTX-VAE, and 4.42 on DC-AE-f32, while remaining architecture-agnostic and compatible with diverse backbones. Our study shows that explicitly managing frequency exposure can turn high-dimensional latent spaces into more diffusible targets.
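The abstract describes the warm-up only at a high level. As a rough illustration of the idea, the sketch below amplifies the high-frequency band of a latent tensor via a 2D FFT, with a gain that linearly anneals back to 1.0 over a warm-up horizon; the function name, the radial cutoff, and the linear schedule are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def freq_warmup_latent(latent, step, warmup_steps, max_gain=2.0, cutoff=0.5):
    """Hypothetical frequency warm-up: boost high-frequency latent content
    early in training, annealing the boost to a no-op by `warmup_steps`.

    latent: array of shape [C, H, W] (a single latent sample).
    """
    # Linearly anneal the high-frequency gain from max_gain down to 1.0.
    progress = min(step / warmup_steps, 1.0)
    gain = max_gain + (1.0 - max_gain) * progress

    # Move the latent into the spatial-frequency domain.
    spec = np.fft.fft2(latent, axes=(-2, -1))
    spec = np.fft.fftshift(spec, axes=(-2, -1))

    # Radial mask: 1.0 inside the low-frequency disc, `gain` outside it.
    _, h, w = latent.shape
    yy, xx = np.meshgrid(np.linspace(-1, 1, h),
                         np.linspace(-1, 1, w), indexing="ij")
    radius = np.sqrt(yy**2 + xx**2)
    mask = np.where(radius > cutoff, gain, 1.0)

    # Scale high frequencies, then transform back to the latent domain.
    spec = spec * mask
    spec = np.fft.ifftshift(spec, axes=(-2, -1))
    return np.real(np.fft.ifft2(spec, axes=(-2, -1)))
```

Under this sketch, the diffusion model would be trained on the warmed latents in place of the raw encoder outputs; once `step` reaches `warmup_steps`, the transform is the identity, so late training matches standard latent diffusion.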