🤖 AI Summary
Diffusion models struggle to generate samples satisfying scientific constraints, and existing regularization or sampling-guidance approaches often induce distributional shift—especially under misspecified constraints, leading to poor robustness. This paper proposes a novel paradigm that neither modifies the training objective nor alters the sampling procedure: it internalizes soft constraint priors as inductive biases within the denoiser, achieved via differentiable constraint projection, adaptive weight tuning, and implicit gradient guidance—embedding constraint preferences directly into the network architecture. The design ensures both high constraint satisfaction and robustness to constraint misspecification while preserving the original data distribution. Experiments across multiple scientific modeling tasks demonstrate an average 32% improvement in constraint satisfaction rate, a negligible FID degradation of less than 0.5, and significantly superior data fidelity compared to baselines.
📝 Abstract
Diffusion models struggle to produce samples that respect constraints, a common requirement in scientific applications. Recent approaches have introduced regularization terms in the loss or guidance methods during sampling to enforce such constraints, but they bias the generative model away from the true data distribution. This is a problem, especially when the constraint is misspecified, a common issue when formulating constraints on scientific data. In this paper, instead of changing the loss or the sampling loop, we integrate a guidance-inspired adjustment into the denoiser itself, giving it a soft inductive bias towards constraint-compliant samples. We show that these softly constrained denoisers exploit constraint knowledge to improve compliance over standard denoisers, and maintain enough flexibility to deviate from it when there is misspecification with observed data.