🤖 AI Summary
To address the excessive batch sizes, poor generalization, high memory consumption, and prolonged training times that arise from uniform grid sampling in DeepONet training, this work proposes, for the first time, integrating stochastic sampling directly into the trunk network's input layer in place of conventional fixed-grid sampling. In each training iteration, input points are dynamically sampled from varying spatial locations, substantially reducing the per-iteration batch size while improving robustness to functional distribution shifts. Theoretical analysis and experiments on three benchmark PDE tasks show that the proposed method achieves comparable or slightly better test accuracy while cutting training time by 30–50% and GPU memory usage by 40–60%, and it markedly improves generalization and noise robustness. The core contribution is the first deep coupling of stochastic sampling with the DeepONet architecture, enabling the simultaneous optimization of accuracy, computational efficiency, and generalizability.
📝 Abstract
Neural operators (NOs) employ deep neural networks to learn mappings between infinite-dimensional function spaces. Deep operator network (DeepONet), a popular NO architecture, has demonstrated success in the real-time prediction of complex dynamics across various scientific and engineering applications. In this work, we introduce a random sampling technique to be adopted during the training of DeepONet, aimed at improving the generalization ability of the model, while significantly reducing the computational time. The proposed approach targets the trunk network of the DeepONet model that outputs the basis functions corresponding to the spatiotemporal locations of the bounded domain on which the physical system is defined. While constructing the loss function, DeepONet training traditionally considers a uniform grid of spatiotemporal points at which all the output functions are evaluated for each iteration. This approach leads to a larger batch size, resulting in poor generalization and increased memory demands, due to the limitations of the stochastic gradient descent (SGD) optimizer. The proposed random sampling over the inputs of the trunk net mitigates these challenges, improving generalization and reducing memory requirements during training, resulting in significant computational gains. We validate our hypothesis through three benchmark examples, demonstrating substantial reductions in training time while achieving comparable or lower overall test errors relative to the traditional training approach. Our results indicate that incorporating randomization in the trunk network inputs during training enhances the efficiency and robustness of DeepONet, offering a promising avenue for improving the framework's performance in modeling complex physical systems.
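The sampling idea described above can be illustrated with a minimal NumPy sketch. The linear stand-ins for the branch and trunk networks, the grid size, and all variable names below are illustrative assumptions, not the paper's implementation; the point is only the contrast between evaluating the loss on the full fixed grid versus on a freshly drawn random subset of trunk inputs each iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a fixed uniform grid of P_full spatiotemporal points
# and a batch of B input functions, as in standard DeepONet training.
P_full = 100                                  # points in the fixed uniform grid
P_sub = 20                                    # randomly sampled points per iteration
B = 8                                         # input functions per batch

grid = np.linspace(0.0, 1.0, P_full)[:, None]  # (P_full, 1) trunk inputs
u_batch = rng.standard_normal((B, 5))          # (B, m) branch inputs (sensor values)
targets = rng.standard_normal((B, P_full))     # G(u)(y) evaluated on the full grid

def loss_on_points(point_idx):
    """Toy surrogate for the DeepONet loss, evaluated only at the
    selected trunk-input points (linear 'networks' for illustration)."""
    W_branch = np.full((5, 3), 0.1)            # stand-in branch net
    W_trunk = np.full((1, 3), 0.1)             # stand-in trunk net
    b = u_batch @ W_branch                     # (B, p) branch coefficients
    t = grid[point_idx] @ W_trunk              # (n_points, p) trunk basis values
    pred = b @ t.T                             # (B, n_points) inner product
    return np.mean((pred - targets[:, point_idx]) ** 2)

# Conventional training: every iteration evaluates all B * P_full
# output values, yielding a large effective batch.
full_idx = np.arange(P_full)
loss_full = loss_on_points(full_idx)

# Random-sampling variant: draw a small subset of trunk inputs each
# iteration, shrinking the effective batch to B * P_sub.
sub_idx = rng.choice(P_full, size=P_sub, replace=False)
loss_sub = loss_on_points(sub_idx)

print(B * P_full, B * P_sub)  # effective per-iteration batch sizes: 800 160
```

Resampling `sub_idx` at every iteration is what distinguishes this from a fixed coarser grid: over many iterations the trunk network still sees the whole domain, but each gradient step is cheaper and noisier, which is the mechanism the abstract credits for the memory savings and improved generalization.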