From Uncertain to Safe: Conformal Fine-Tuning of Diffusion Models for Safe PDE Control

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the lack of safety guarantees in PDE-constrained control, this paper proposes SafeDiffCon, the first framework to integrate conformal prediction into diffusion-model-based control, enabling uncertainty-aware inference-time fine-tuning and a reweighted diffusion-loss objective. It jointly optimizes safety-constraint satisfaction and control performance during both the post-training and inference stages. By unifying iterative target-conditioned guidance, online inference-time adaptation, and rigorous uncertainty quantification, SafeDiffCon pursues strong control performance under strict safety bounds. Evaluated on three canonical benchmarks (the 1D Burgers' equation, 2D incompressible fluid flow, and controlled nuclear fusion), SafeDiffCon is the only method that satisfies all safety constraints while outperforming classical and state-of-the-art deep learning baselines in control performance.

📝 Abstract
The application of deep learning to partial differential equation (PDE)-constrained control is gaining increasing attention. However, existing methods rarely consider the safety requirements that are crucial in real-world applications. To address this limitation, we propose Safe Diffusion Models for PDE Control (SafeDiffCon), which introduce the uncertainty quantile as a model uncertainty quantification to achieve optimal control under safety constraints through both post-training and inference phases. First, our approach post-trains a pre-trained diffusion model to generate control sequences that better satisfy safety constraints while achieving improved control objectives, via a reweighted diffusion loss that incorporates the uncertainty quantile estimated using conformal prediction. Second, during inference, the diffusion model dynamically adjusts both its generation process and its parameters through iterative guidance and fine-tuning, conditioned on the control targets while integrating the estimated uncertainty quantile. We evaluate SafeDiffCon on three control tasks: the 1D Burgers' equation, 2D incompressible fluid flow, and a controlled nuclear fusion problem. Results demonstrate that SafeDiffCon is the only method that satisfies all safety constraints, whereas other classical and deep learning baselines fail. Furthermore, while adhering to the safety constraints, SafeDiffCon achieves the best control performance.
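The abstract's "uncertainty quantile estimated using conformal prediction" can be illustrated with the standard split-conformal recipe: take nonconformity scores (e.g. residuals between a surrogate model's predicted and true PDE states) on a held-out calibration set, and return the finite-sample-corrected quantile. A minimal sketch, with the function name and example residuals chosen for illustration (the paper's exact score and quantile definition may differ):

```python
import math

def conformal_quantile(scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) empirical quantile of
    calibration nonconformity scores (split conformal prediction)."""
    n = len(scores)
    # The ceil((n + 1) * (1 - alpha)) rank yields marginal coverage
    # of at least 1 - alpha under exchangeability.
    rank = math.ceil((n + 1) * (1 - alpha))
    if rank > n:
        raise ValueError("Too few calibration scores for this alpha.")
    return sorted(scores)[rank - 1]

# Illustrative calibration residuals of a learned PDE surrogate
residuals = [0.12, 0.05, 0.31, 0.08, 0.22, 0.17, 0.03, 0.27, 0.10, 0.19]
q = conformal_quantile(residuals, alpha=0.2)
# A fresh, exchangeable residual then falls below q with probability >= 0.8
```

Inflating predicted safety-constraint values by such a quantile `q` is what turns a point estimate into a guarantee that holds despite model error.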
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety in PDE control
Integrating uncertainty quantile in models
Dynamic adjustment during model inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conformal prediction for uncertainty
Reweighted diffusion loss integration
Dynamic model adjustment via fine-tuning
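The "reweighted diffusion loss" listed above can be sketched as a per-sample reweighting of the ordinary denoising loss, where control sequences whose predicted safety margin (inflated by the conformal quantile `q`) is violated are down-weighted. All names and the exponential weighting scheme here are illustrative assumptions, not the paper's actual formulation:

```python
import math

def reweighted_diffusion_loss(per_sample_losses, safety_margins, q, beta=5.0):
    """Hypothetical reweighted denoising loss.

    safety_margins: predicted constraint value minus safety limit per sample
                    (positive = violation); q: conformal uncertainty quantile.
    Samples whose margin, inflated by q, exceeds zero receive exponentially
    smaller weight, steering training toward safe control sequences.
    """
    weights = [math.exp(-beta * max(m + q, 0.0)) for m in safety_margins]
    total = sum(weights)
    weights = [w / total for w in weights]
    return sum(w * l for w, l in zip(weights, per_sample_losses))
```

Note the role of `q`: a larger quantile treats borderline samples as unsafe, so the model is penalized conservatively whenever its own predictions are untrustworthy.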