🤖 AI Summary
Diffusion models (DMs) achieve high-quality generation but suffer from slow inference, requiring dozens to thousands of iterative steps; consistency trajectory models (CTMs) enable single-step ODE solving yet are restricted to irreversible Gaussian-noise-to-data mappings. To address these limitations, we propose Generalized Consistency Trajectory Models (GCTMs), the first framework generalizing consistency modeling to *reversible* ODE transformations between *arbitrary* source and target distributions. GCTMs unify diverse tasks—including image-to-image translation, inpainting, and editing—under a single generative paradigm. Methodologically, we introduce a generalized consistency condition, an explicit guidance mechanism, and a distribution alignment constraint to enable direct trajectory generation between arbitrary time points. Experiments demonstrate that GCTMs match the generation quality of multi-step DMs on inpainting, style transfer, and super-resolution, while accelerating inference by 10–100× and enabling fine-grained, controllable editing.
📝 Abstract
Diffusion models (DMs) excel at unconditional generation as well as in applications such as image editing and restoration. The success of DMs lies in the iterative nature of diffusion: diffusion breaks down the complex process of mapping noise to data into a sequence of simple denoising tasks. Moreover, we are able to exert fine-grained control over the generation process by injecting guidance terms into each denoising step. However, the iterative process is also computationally intensive, often requiring tens to thousands of function evaluations. Although consistency trajectory models (CTMs) enable traversal between any two time points along the probability flow ODE (PFODE) and score inference with a single function evaluation, CTMs only allow translation from Gaussian noise to data. This work aims to unlock the full potential of CTMs by proposing generalized CTMs (GCTMs), which translate between arbitrary distributions via ODEs. We discuss the design space of GCTMs and demonstrate their efficacy in various image manipulation tasks such as image-to-image translation, restoration, and editing.
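The contrast the abstract draws can be made concrete with a toy example. Below is a minimal sketch (not the paper's method) for a 1-D Gaussian data distribution under an EDM-style noise schedule sigma(t) = t, where the score of the noised marginal is known in closed form. A multi-step Euler solver integrates the PFODE, optionally adding a guidance term at every denoising step; a closed-form trajectory map then jumps between two arbitrary time points in one evaluation, which is the kind of mapping a (G)CTM would learn for real data. All function names and the constants `SIGMA_DATA` and `T_MAX` are illustrative assumptions.

```python
import math

SIGMA_DATA = 1.0   # std of the (assumed Gaussian) data distribution -- toy choice
T_MAX = 10.0       # starting noise level -- toy choice

def score(x, t):
    # Exact score of p_t = N(0, SIGMA_DATA^2 + t^2): the noised marginal of
    # Gaussian data under the schedule sigma(t) = t.
    return -x / (SIGMA_DATA**2 + t**2)

def pfode_euler(x, t_start, t_end, n_steps, guidance=None):
    # Iterative DM-style sampling: Euler steps on dx/dt = -t * score(x, t).
    # A guidance term may be injected at every step, mirroring how DMs
    # achieve fine-grained control during generation.
    dt = (t_end - t_start) / n_steps
    t = t_start
    for _ in range(n_steps):
        drift = -t * score(x, t)
        if guidance is not None:
            drift += guidance(x, t)
        x += drift * dt
        t += dt
    return x

def trajectory_jump(x, t_start, t_end):
    # Closed-form solution of the same PFODE for this Gaussian toy case:
    # a single-evaluation jump between two arbitrary time points, i.e. the
    # kind of map a consistency trajectory model learns for real data.
    num = SIGMA_DATA**2 + t_end**2
    den = SIGMA_DATA**2 + t_start**2
    return x * math.sqrt(num / den)

x_T = 4.0  # a sample at high noise level
x_multi = pfode_euler(x_T, T_MAX, 1e-3, n_steps=2000)  # ~2000 evaluations
x_single = trajectory_jump(x_T, T_MAX, 1e-3)           # 1 evaluation
```

In this toy setting the one-shot jump and the 2000-step solver agree closely, illustrating the speedup consistency-style models target; for real data the jump map is not available in closed form and must be learned, which is the role of CTMs and, between arbitrary distributions, GCTMs.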