Accelerating PDE-Constrained Optimization by the Derivative of Neural Operators

📅 2025-06-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the low data efficiency and inaccurate gradients that undermine optimization stability in PDE-constrained optimization (PDECO), this work introduces three key innovations: (1) an optimization-driven active sampling and training paradigm that prioritizes gradient-sensitive regions to enhance data utilization; (2) a Virtual-Fourier Layer that explicitly models and corrects high-order derivative errors via spectral-domain regularization; and (3) a hybrid optimization framework integrating neural operators with numerical solvers to balance learning speed and numerical robustness. Experiments demonstrate substantial improvements: average relative error in derivative prediction is reduced by 37–58%; gradient-based optimization exhibits enhanced stability; convergence speed accelerates by 2.1–3.4× across diverse PDECO tasks; and overall optimization success rate increases by 22–41% compared to purely data-driven or purely numerical approaches.

Technology Category

Application Category

📝 Abstract
PDE-Constrained Optimization (PDECO) problems can be accelerated significantly by employing gradient-based methods with surrogate models like neural operators compared to traditional numerical solvers. However, this approach faces two key challenges: (1) **Data inefficiency**: Lack of efficient data sampling and effective training for neural operators, particularly for optimization purpose. (2) **Instability**: High risk of optimization derailment due to inaccurate neural operator predictions and gradients. To address these challenges, we propose a novel framework: (1) **Optimization-oriented training**: we leverage data from full steps of traditional optimization algorithms and employ a specialized training method for neural operators. (2) **Enhanced derivative learning**: We introduce a *Virtual-Fourier* layer to enhance derivative learning within the neural operator, a crucial aspect for gradient-based optimization. (3) **Hybrid optimization**: We implement a hybrid approach that integrates neural operators with numerical solvers, providing robust regularization for the optimization process. Our extensive experimental results demonstrate the effectiveness of our model in accurately learning operators and their derivatives. Furthermore, our hybrid optimization approach exhibits robust convergence.
Problem

Research questions and friction points this paper is trying to address.

Inefficient data sampling for neural operator training
Unstable optimization due to inaccurate neural predictions
Lack of robust hybrid PDE optimization methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimization-oriented training with traditional algorithm data
Enhanced derivative learning via Virtual-Fourier layer
Hybrid optimization combining neural operators and solvers
🔎 Similar Papers
No similar papers found.
Ze Cheng
Ze Cheng
Bosch Center for Artificial Intelligence, China
mathcomputer sciencemachine learning
Z
Zhuoyu Li
Bosch (China) Invest Ltd., Shanghai, China
Xiaoqiang Wang
Xiaoqiang Wang
Florida State University
Phase Field MethodsEdge-Weighted Centroidal Voronoi Tessellations
J
Jianing Huang
Bosch (China) Invest Ltd., Shanghai, China
Z
Zhizhou Zhang
Bosch (China) Invest Ltd., Shanghai, China
Zhongkai Hao
Zhongkai Hao
Tsinghua University
machine learningAI for Sciencephysics-informed machine learning
H
Hang Su
Dept. of Comp. Sci. & Techn., Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, Tsinghua University