PIRF: Physics-Informed Reward Fine-Tuning for Diffusion Models

📅 2025-09-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion models in scientific generation often violate physical laws. This paper formulates physics-constrained generation as a sparse-reward optimization problem and proposes Physics-guided Inverse Reinforcement (PIRF): instead of approximating value functions, PIRF performs gradient backpropagation directly through trajectory-level physics rewards; it further introduces hierarchical truncation of backpropagation and weight regularization to unify the reward paradigm and enhance both training stability and inference efficiency. Evaluated on five PDE benchmarks, PIRF achieves state-of-the-art performance in both physical consistency—e.g., conservation-law satisfaction rate—and sampling efficiency—accelerating inference by 2.1–3.8× over existing methods. By enabling end-to-end differentiable, proxy-free, and computationally efficient physics-driven generation, PIRF establishes a novel paradigm for integrating hard physical constraints into generative modeling.

Technology Category

Application Category

📝 Abstract
Diffusion models have demonstrated strong generative capabilities across scientific domains, but often produce outputs that violate physical laws. We propose a new perspective by framing physics-informed generation as a sparse reward optimization problem, where adherence to physical constraints is treated as a reward signal. This formulation unifies prior approaches under a reward-based paradigm and reveals a shared bottleneck: reliance on diffusion posterior sampling (DPS)-style value function approximations, which introduce non-negligible errors and lead to training instability and inference inefficiency. To overcome this, we introduce Physics-Informed Reward Fine-tuning (PIRF), a method that bypasses value approximation by computing trajectory-level rewards and backpropagating their gradients directly. However, a naive implementation suffers from low sample efficiency and compromised data fidelity. PIRF mitigates these issues through two key strategies: (1) a layer-wise truncated backpropagation method that leverages the spatiotemporally localized nature of physics-based rewards, and (2) a weight-based regularization scheme that improves efficiency over traditional distillation-based methods. Across five PDE benchmarks, PIRF consistently achieves superior physical enforcement under efficient sampling regimes, highlighting the potential of reward fine-tuning for advancing scientific generative modeling.
Problem

Research questions and friction points this paper is trying to address.

Diffusion models often violate physical laws in scientific generation
Existing methods suffer from approximation errors causing training instability
Naive reward optimization compromises sample efficiency and data fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bypasses value approximation via trajectory-level rewards
Uses layer-wise truncated backpropagation for efficiency
Implements weight-based regularization over distillation methods
🔎 Similar Papers
2024-07-16arXiv.orgCitations: 2