AI Summary
Diffusion models suffer from high inference latency, hindering real-time text-to-image generation. To address this, we propose Stochastic Consistency Distillation (SCott), the first method to embed a stochastic differential equation (SDE) solver into the consistency distillation framework. SCott jointly controls the noise strength and sampling process of the SDE solver while introducing an adversarial loss to enhance sample fidelity under extreme acceleration (1–2 steps). Distilled from Stable Diffusion-V1.5, SCott generates high-fidelity images in just 1–2 steps, achieving an FID of 22.1 on MSCOCO-2017, surpassing 1-step InstaFlow (23.4) and matching 4-step UFOGen. It also improves sample diversity in high-resolution generation by up to 16%. Key innovations include: (i) SDE-driven consistency distillation, (ii) joint control of noise strength and sampling steps, and (iii) adversarial regularization for ultra-low-step synthesis.
Abstract
The iterative sampling procedure employed by diffusion models (DMs) often leads to significant inference latency. To address this, we propose Stochastic Consistency Distillation (SCott) to enable accelerated text-to-image generation, where high-quality samples can be produced with just 1–2 sampling steps, and further improvements can be obtained by adding more steps. In contrast to vanilla consistency distillation (CD), which distills the ordinary differential equation (ODE) solver-based sampling process of a pretrained teacher model into a student, SCott explores and validates the efficacy of integrating stochastic differential equation (SDE) solvers into CD to fully unleash the potential of the teacher. SCott is augmented with elaborate strategies to control the noise strength and sampling process of the SDE solver. An adversarial loss is further incorporated to strengthen sample quality when using very few sampling steps. Empirically, on the MSCOCO-2017 5K dataset with a Stable Diffusion-V1.5 teacher, SCott achieves an FID (Fréchet Inception Distance) of 22.1, surpassing that (23.4) of the 1-step InstaFlow (Liu et al., 2023) and matching that of 4-step UFOGen (Xue et al., 2023b). Moreover, SCott yields more diverse samples than other consistency models for high-resolution image generation (Luo et al., 2023a), with up to 16% improvement on a qualified metric. The code and checkpoints are coming soon.
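To make the core idea concrete, here is a minimal toy sketch of consistency distillation where adjacent noise levels are linked by a stochastic (SDE) solver step rather than a deterministic ODE step, as the abstract describes. Everything here is a simplifying assumption: a 1-D Gaussian toy with a closed-form score stands in for the Stable Diffusion-V1.5 teacher, a linear function stands in for the student U-Net, and the noise schedule, solver discretization, and loss weighting are hypothetical illustrations, not SCott's actual design (the adversarial term is also omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy teacher score: for data ~ N(0, 1) under variance-exploding noise,
# the marginal at level sigma is N(0, 1 + sigma^2), so the exact score
# of x_sigma is -x / (1 + sigma^2).  (Stand-in for the pretrained DM.)
def teacher_score(x, sigma):
    return -x / (1.0 + sigma ** 2)

# One reverse SDE step from sigma_hi down to sigma_lo (Euler-Maruyama
# style discretization).  Vanilla CD would use a deterministic ODE step
# here; the stochastic term is the ingredient SCott adds.
def sde_step(x, sigma_hi, sigma_lo, rng):
    d_var = sigma_hi ** 2 - sigma_lo ** 2
    drift = d_var * teacher_score(x, sigma_hi)      # denoising drift
    noise = np.sqrt(d_var) * rng.standard_normal(x.shape)
    return x + drift + noise

# Hypothetical student consistency function f(x, sigma; w): a linear
# parameterization standing in for the distilled network.
def student(x, sigma, w):
    return w * x / (1.0 + sigma ** 2)

# Consistency-distillation loss: student outputs at adjacent noise
# levels, linked by one teacher solver step, should agree.
def cd_loss(w, x_hi, sigma_hi, sigma_lo, rng):
    x_lo = sde_step(x_hi, sigma_hi, sigma_lo, rng)
    return np.mean((student(x_hi, sigma_hi, w)
                    - student(x_lo, sigma_lo, w)) ** 2)

# Draw samples at the higher noise level and evaluate the loss once.
x_hi = rng.standard_normal(512) * np.sqrt(1.0 + 4.0 ** 2)
loss = cd_loss(1.0, x_hi, sigma_hi=4.0, sigma_lo=2.0, rng=rng)
print(loss)
```

In a real training loop this loss would be minimized over the student's parameters (with the target branch computed by an EMA copy of the student, as in standard CD); the sketch only shows where the SDE solver enters the distillation target.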