🤖 AI Summary
This work addresses the instability and degenerate solutions commonly observed when training consistency models from scratch, a phenomenon that lacks a unified theoretical explanation in the prior literature. By introducing a flow-map perspective, we systematically analyze the training dynamics of consistency models and show that instability arises from gradient explosion and degenerate trajectories. Building on this insight, we reformulate the self-distillation mechanism to explicitly control gradient norms, enabling stable optimization without reliance on pretrained diffusion models. Our approach significantly improves both convergence and performance across diverse tasks, including image generation and diffusion policy learning, offering a theoretically grounded and practically effective advance for consistency training.
📝 Abstract
Consistency models have been proposed for fast generative modeling, achieving results competitive with diffusion and flow models. However, these methods exhibit inherent instability and limited reproducibility when trained from scratch, motivating subsequent work to explain and mitigate these issues. While these efforts have provided valuable insights, the explanations remain fragmented and their theoretical relationships unclear. In this work, we provide a theoretical examination of consistency models by analyzing them from a flow-map perspective. This joint analysis clarifies how training stability and convergence behavior can give rise to degenerate solutions. Building on these insights, we revisit self-distillation as a practical remedy for certain forms of suboptimal convergence and reformulate it to avoid excessive gradient norms, enabling stable optimization. We further demonstrate that our strategy extends beyond image generation to diffusion-based policy learning, without relying on a pretrained diffusion model for initialization, thereby illustrating its broader applicability.
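The abstract describes a self-distillation loop in which gradient norms are explicitly controlled to keep optimization stable. As a rough illustration only, here is a minimal NumPy sketch of that generic pattern: a student model is regressed toward a stop-gradient EMA-teacher target at an adjacent noise level, with the gradient rescaled to a bounded norm before each update. The toy linear "network", the noise schedule, and all hyperparameter names here are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the consistency model: f(x, t) = w0*x + w1*t
# (a real model would be a neural network; this keeps the sketch runnable).
w_student = rng.normal(size=2)
w_teacher = w_student.copy()  # EMA teacher supplies the self-distillation target


def f(w, x, t):
    return w[0] * x + w[1] * t


def train_step(w_student, w_teacher, x0, lr=0.1, max_grad_norm=1.0, ema=0.99):
    # Two adjacent noise levels on a toy linear noising path (illustrative).
    t, s = 1.0, 0.9
    noise = rng.normal()
    x_t = x0 + t * noise
    x_s = x0 + s * noise

    # Target from the teacher: no gradient flows through it (stop-gradient).
    target = f(w_teacher, x_s, s)
    pred = f(w_student, x_t, t)
    loss = 0.5 * (pred - target) ** 2

    # Analytic gradient of the squared loss w.r.t. the student weights.
    grad = (pred - target) * np.array([x_t, t])

    # Gradient-norm control: rescale so the update norm stays bounded,
    # a simple instance of the stabilization idea the abstract alludes to.
    norm = np.linalg.norm(grad)
    if norm > max_grad_norm:
        grad = grad * (max_grad_norm / norm)

    w_student = w_student - lr * grad
    w_teacher = ema * w_teacher + (1.0 - ema) * w_student
    return w_student, w_teacher, loss


for step in range(200):
    x0 = rng.normal()  # a "data" sample
    w_student, w_teacher, loss = train_step(w_student, w_teacher, x0)
```

In this sketch the norm clipping caps the per-step update, so the loop cannot diverge even when the prediction error is large; the EMA teacher changes slowly, which is what makes the stop-gradient target meaningful.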