🤖 AI Summary
Existing dynamic-resolution diffusion sampling methods rely on heuristic re-noising when switching resolutions, which disrupts cross-stage consistency, while abrupt whole-latent upsampling causes error accumulation and visible artifacts. This work proposes Fresco, the first framework to enable structurally consistent dynamic-resolution generation. Fresco introduces a unified re-noising mechanism, progressive upsampling, and a local convergence-aware strategy that align all stages toward a common final target, avoiding redundant relearning of global structure. Built on diffusion Transformer architectures, Fresco achieves a 10× speedup on FLUX and a 5× speedup on HunyuanVideo; when combined with distillation techniques, it attains up to a 22× acceleration with negligible quality degradation.
📝 Abstract
Diffusion Transformers achieve impressive generative quality but remain computationally expensive due to iterative sampling. Recently, dynamic-resolution sampling has emerged as a promising acceleration technique that reduces the resolution of early sampling steps. However, existing methods rely on heuristic re-noising at every resolution transition, injecting noise that breaks cross-stage consistency and forces the model to relearn global structure. In addition, these methods indiscriminately upsample the entire latent space at once, without checking which regions have actually converged, causing accumulated errors and visible artifacts. We therefore propose \textbf{Fresco}, a dynamic-resolution framework that unifies re-noising and global structure across stages through progressive upsampling, preserving both the efficiency of low-resolution drafting and the fidelity of high-resolution refinement, with all stages aligned toward the same final target. Fresco achieves near-lossless acceleration across diverse domains and models, including a 10$\times$ speedup on FLUX and a 5$\times$ speedup on HunyuanVideo, while remaining orthogonal to distillation, quantization, and feature caching, reaching a 22$\times$ speedup when combined with distilled models. Our code is included in the supplementary material and will be released on GitHub.
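The abstract's core recipe — draft at low resolution, denoise until the latent has converged, then progressively upsample toward the full-resolution target instead of re-noising — can be illustrated with a deliberately simplified toy. This is a minimal sketch, not Fresco's implementation: there is no real diffusion model here, and `toy_denoise_step`, the nearest-neighbor `upsample`, and the scalar convergence tolerance `tol` are all illustrative assumptions standing in for the paper's actual components.

```python
import numpy as np

def toy_denoise_step(x, target):
    # Stand-in for one denoising step: move the latent halfway toward
    # the (downsampled) target. A real sampler would call the model here.
    return x + 0.5 * (target - x)

def upsample(x):
    # Nearest-neighbor 2x upsampling of a square latent.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def progressive_sample(target, start_res=8, steps_per_stage=4, tol=1e-2):
    """Toy dynamic-resolution sampling: refine at each resolution until
    converged, then upsample the current latent (no re-noising)."""
    res = start_res
    x = np.zeros((res, res))
    while True:
        # Subsample the final target to the current stage's resolution,
        # so every stage is aligned toward the same final target.
        stride = target.shape[0] // res
        tgt = target[::stride, ::stride]
        for _ in range(steps_per_stage):
            x_prev = x
            x = toy_denoise_step(x, tgt)
            if np.abs(x - x_prev).max() < tol:
                break  # this stage has converged; stop refining early
        if res == target.shape[0]:
            return x
        x = upsample(x)  # carry structure forward instead of re-noising
        res *= 2
```

Note the sketch uses a single global convergence check per stage; the paper's local convergence-aware strategy instead decides region by region which parts of the latent are ready to be upsampled.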