🤖 AI Summary
Diffusion models rely on large-scale datasets and risk memorizing sensitive information, creating severe privacy risks. To address this, we propose Online SFBD, a privacy-preserving training paradigm that eliminates explicit denoising iterations and fine-tuning loops. We reformulate Score-Free Backward Dynamics (SFBD) as a continuous dynamical system, the SFBD flow, establish its theoretical equivalence to consistency-based methods, and enable end-to-end differentiable optimization. The approach combines continuous optimization, an alternating-projection view of training, consistency regularization, and online noise calibration. It consistently outperforms strong baselines across multiple benchmarks, converges faster, and requires no manual hyperparameter tuning. Key contributions: (1) a theoretical unification of SFBD with consistency learning, revealing their intrinsic connection; and (2) a practical framework for efficient, robust, and low-privacy-risk diffusion model training.
📝 Abstract
Diffusion models achieve strong generative performance but often rely on large datasets that may include sensitive content. This challenge is compounded by the models' tendency to memorize training data, raising privacy concerns. SFBD (Lu et al., 2025) addresses this by training on corrupted data and using a limited set of clean samples to capture local structure and improve convergence. However, its iterative denoising and fine-tuning loop requires manual coordination, making it burdensome to implement. We reinterpret SFBD as an alternating projection algorithm and introduce a continuous variant, the SFBD flow, that removes the need for alternating steps. We further show its connection to methods based on consistency constraints, and demonstrate that its practical instantiation, Online SFBD, consistently outperforms strong baselines across benchmarks.
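The abstract reinterprets SFBD as an alternating projection algorithm. As a generic illustration of that idea (not the paper's actual training procedure), the classic scheme repeatedly projects a point onto two sets in turn and converges to a point in their intersection. The sets below (the x-axis and a closed disk) and the helper names `project_line` and `project_disk` are illustrative choices, not anything from the paper:

```python
import math

def project_line(p):
    # Projection onto the x-axis: {(x, 0)}.
    return (p[0], 0.0)

def project_disk(p, center=(0.5, 0.5), r=1.0):
    # Projection onto a closed disk of radius r around center:
    # points inside are fixed; points outside map to the boundary.
    dx, dy = p[0] - center[0], p[1] - center[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return p
    return (center[0] + dx * r / d, center[1] + dy * r / d)

def alternating_projections(p, iters=100):
    # Alternate the two projections; for convex sets with nonempty
    # intersection this converges to a point lying in both sets.
    for _ in range(iters):
        p = project_line(project_disk(p))
    return p

x, y = alternating_projections((5.0, 3.0))
# The limit lies on the x-axis and on the disk boundary,
# at x = 0.5 + sqrt(3)/2.
```

SFBD's iterative denoising/fine-tuning loop plays an analogous role between two constraint sets; the continuous SFBD flow described above replaces these discrete alternating steps with a single continuous dynamic.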