Adaptive Discretization for Consistency Models

📅 2025-10-20
🤖 AI Summary
Existing consistency models (CMs) rely on hand-crafted discrete-time scheduling schemes, limiting generalization across diverse noise schedules and datasets. To address this, we propose the first adaptive and unified discretization framework, formulating step-size optimization as a constrained differentiable optimization problem: local consistency serves as the differentiable objective, while global consistency is enforced as a hard constraint via Lagrange multipliers; the problem is solved efficiently using the Gauss–Newton method. Our approach enables end-to-end automatic optimization of the discretization process without manual intervention, substantially improving training stability and generation efficiency. Evaluated on CIFAR-10 and ImageNet, our method accelerates model convergence and improves FID and Inception Score (IS), with negligible computational overhead. Crucially, it is fully compatible with state-of-the-art diffusion model variants, offering a general-purpose discretization solution for consistency modeling.
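The constrained formulation described above can be sketched as follows. The notation here is illustrative rather than the paper's own: $t = (t_1, \dots, t_N)$ denotes the discretization knots, $\mathcal{L}_{\mathrm{loc}}$ and $\mathcal{L}_{\mathrm{glob}}$ stand in for the local and global consistency losses, $\epsilon$ is a tolerance on the global error, and $\lambda$ is the Lagrange multiplier that trades the two off:

```latex
\min_{t_1 < \dots < t_N} \; \mathcal{L}_{\mathrm{loc}}(t)
\quad \text{s.t.} \quad \mathcal{L}_{\mathrm{glob}}(t) \le \epsilon
\qquad \Longrightarrow \qquad
\mathcal{L}(t, \lambda) \;=\; \mathcal{L}_{\mathrm{loc}}(t) \;+\; \lambda \bigl( \mathcal{L}_{\mathrm{glob}}(t) - \epsilon \bigr).
```

Minimizing the Lagrangian over $t$ for a given $\lambda$ is what the summary refers to as solving the trade-off with the Gauss–Newton method.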

📝 Abstract
Consistency Models (CMs) have shown promise for efficient one-step generation. However, most existing CMs rely on manually designed discretization schemes, which can cause repeated adjustments for different noise schedules and datasets. To address this, we propose a unified framework for the automatic and adaptive discretization of CMs, formulating it as an optimization problem with respect to the discretization step. Concretely, during the consistency training process, we propose using local consistency as the optimization objective to ensure trainability by avoiding excessive discretization, and taking global consistency as a constraint to ensure stability by controlling the denoising error in the training target. We establish the trade-off between local and global consistency with a Lagrange multiplier. Building on this framework, we achieve adaptive discretization for CMs using the Gauss-Newton method. We refer to our approach as ADCMs. Experiments demonstrate that ADCMs significantly improve the training efficiency of CMs, achieving superior generative performance with minimal training overhead on both CIFAR-10 and ImageNet. Moreover, ADCMs exhibit strong adaptability to more advanced DM variants. Code is available at https://github.com/rainstonee/ADCM.
Problem

Research questions and friction points this paper is trying to address.

Automating discretization schemes for Consistency Models
Optimizing local and global consistency trade-offs adaptively
Improving training efficiency across datasets and noise schedules
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically adapts discretization steps for consistency models
Uses local consistency objective with global constraint
Optimizes via Gauss-Newton method with Lagrange multiplier