🤖 AI Summary
Real-time prediction of high-dimensional parametric partial differential equations (PDEs) faces bottlenecks including strong data dependency, poor generalization, and high computational cost. Method: We propose an end-to-end differentiable, physics-informed latent-space operator learning framework. Its core innovation is a novel coupled dual-DeepONet architecture that intrinsically embeds physical constraints throughout latent-space learning and reconstruction, enabling few-shot training and global joint optimization. The method integrates physics-informed neural network (PINN) priors, latent-space dimensionality reduction, and dual-branch operator modeling, trained uniformly via end-to-end backpropagation. Contribution/Results: Evaluated on diverse high-dimensional PDE benchmarks, the framework achieves significantly improved generalization and prediction accuracy. It exhibits near-constant GPU memory footprint and computational complexity, reduces sample requirements by an order of magnitude, and establishes an efficient new paradigm for real-time surrogate modeling of large-scale physical systems.
📝 Abstract
Deep operator network (DeepONet) has shown great promise as a surrogate model for systems governed by partial differential equations (PDEs), learning mappings between infinite-dimensional function spaces with high accuracy. However, achieving low generalization errors often requires highly overparameterized networks, posing significant challenges for large-scale, complex systems. To address these challenges, latent DeepONet was proposed, introducing a two-step approach: first, a reduced-order model is used to learn a low-dimensional latent space, followed by operator learning on this latent space. While effective, this method is inherently data-driven, relying on large datasets and making it difficult to incorporate governing physics into the framework. Additionally, the decoupled nature of these steps prevents end-to-end optimization and the ability to handle data scarcity. This work introduces PI-Latent-NO, a physics-informed latent operator learning framework that overcomes these limitations. Our architecture employs two coupled DeepONets in an end-to-end training scheme: the first, termed Latent-DeepONet, identifies and learns the low-dimensional latent space, while the second, Reconstruction-DeepONet, maps the latent representations back to the original physical space. By integrating governing physics directly into the training process, our approach requires significantly fewer data samples while achieving high accuracy. Furthermore, the framework is computationally and memory efficient, exhibiting nearly constant scaling behavior on a single GPU and demonstrating the potential for further efficiency gains with distributed training. We validate the proposed method on high-dimensional parametric PDEs, demonstrating its effectiveness as a proof of concept and its potential scalability for large-scale systems.