FunDiff: Diffusion Models over Function Spaces for Physics-Informed Generative Modeling

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of generating continuous field functions for physics-based modeling. Methodologically, it introduces a diffusion-model framework defined and trained directly in an infinite-dimensional function space: (i) it generalizes diffusion processes to function spaces; (ii) it couples a function autoencoder with a latent diffusion process, enabling evaluation at arbitrary continuous spatial locations and handling of arbitrarily discretized inputs; and (iii) it incorporates physics-constrained network architectures and physics-informed loss functions that encode conservation laws, symmetries, and other physical priors. Theoretically, it establishes minimax-optimal guarantees for density estimation in function spaces. Experiments demonstrate high-fidelity, physically consistent continuous field generation across fluid and solid mechanics tasks, with robustness to input noise and low-resolution data. The code and datasets are publicly released.
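The summary does not spell out what a physics-informed loss looks like in practice. As an illustrative example only (not the paper's implementation), a minimal sketch of one common such penalty, assuming an incompressible 2-D velocity field sampled on a regular grid; the function name `divergence_penalty` is hypothetical:

```python
import numpy as np

def divergence_penalty(u, v, dx, dy):
    """Mean-squared divergence of a 2-D velocity field (u, v):
    a typical physics-informed loss term enforcing the incompressibility
    (mass-conservation) constraint div(u, v) = 0 via finite differences."""
    du_dx = np.gradient(u, dx, axis=1)  # ∂u/∂x along the x axis
    dv_dy = np.gradient(v, dy, axis=0)  # ∂v/∂y along the y axis
    div = du_dx + dv_dy
    return float(np.mean(div ** 2))

# A rigid-body rotation field u = -y, v = x is exactly divergence-free.
n = 32
ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
dx = dy = 2.0 / (n - 1)
penalty = divergence_penalty(-ys, xs, dx, dy)  # ≈ 0 up to floating-point error
```

During training, such a term would be added to the reconstruction or diffusion objective so that generated fields are penalized for violating the conservation law.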

📝 Abstract
Recent advances in generative modeling -- particularly diffusion models and flow matching -- have achieved remarkable success in synthesizing discrete data such as images and videos. However, adapting these models to physical applications remains challenging, as the quantities of interest are continuous functions governed by complex physical laws. Here, we introduce $\textbf{FunDiff}$, a novel framework for generative modeling in function spaces. FunDiff combines a latent diffusion process with a function autoencoder architecture to handle input functions with varying discretizations, generate continuous functions evaluable at arbitrary locations, and seamlessly incorporate physical priors. These priors are enforced through architectural constraints or physics-informed loss functions, ensuring that generated samples satisfy fundamental physical laws. We theoretically establish minimax optimality guarantees for density estimation in function spaces, showing that diffusion-based estimators achieve optimal convergence rates under suitable regularity conditions. We demonstrate the practical effectiveness of FunDiff across diverse applications in fluid dynamics and solid mechanics. Empirical results show that our method generates physically consistent samples with high fidelity to the target distribution and exhibits robustness to noisy and low-resolution data. Code and datasets are publicly available at https://github.com/sifanexisted/fundiff.
Problem

Research questions and friction points this paper is trying to address.

Adapting diffusion models to continuous physical functions
Generating continuous functions with physical constraints
Ensuring physical consistency in generative modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent diffusion process in function spaces
Function autoencoder for varying discretizations
Physics-informed loss for physical consistency
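The "function autoencoder for varying discretizations" idea can be illustrated with a toy coordinate-based decoder: a latent code plus continuous query coordinates map to field values, so a sample can be evaluated at any resolution. This is a sketch in the spirit of implicit function representations, not the paper's architecture; the class name and random (untrained) weights are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

class FourierFeatureDecoder:
    """Toy coordinate-based decoder: maps a latent code z and continuous
    2-D coordinates to scalar field values, so one latent sample can be
    queried at arbitrary locations and discretizations. Weights are random
    here; in a trained model they would be learned jointly with an encoder."""

    def __init__(self, latent_dim=8, n_features=16, hidden=32):
        self.B = rng.normal(size=(2, n_features))                    # random Fourier frequencies
        self.W1 = rng.normal(size=(2 * n_features + latent_dim, hidden)) * 0.1
        self.W2 = rng.normal(size=(hidden, 1)) * 0.1

    def __call__(self, z, coords):
        # coords: (N, 2) continuous query points; z: (latent_dim,) latent code
        proj = coords @ self.B
        feats = np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)
        inp = np.concatenate([feats, np.tile(z, (len(coords), 1))], axis=-1)
        h = np.tanh(inp @ self.W1)
        return (h @ self.W2).ravel()                                 # (N,) field values

decoder = FourierFeatureDecoder()
z = rng.normal(size=8)
coarse = decoder(z, np.array([[0.1, 0.2], [0.5, 0.5]]))  # two query points
fine = decoder(z, rng.uniform(size=(1000, 2)))           # same sample, denser grid
```

Because evaluation is a function of continuous coordinates rather than a fixed grid, the same latent sample (e.g., one produced by a latent diffusion process) yields a consistent field at any discretization.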
Authors
Sifan Wang, Postdoctoral fellow, Yale University (Scientific Machine Learning, AI for Science, Machine Learning, Deep Learning)
Zehao Dou, OpenAI (Statistics, LLM, Diffusion models)
Tong-Rui Liu, Department of Aeronautics, Imperial College London
Lu Lu, Department of Statistics and Data Science, Yale University