AI Summary
Neural operator training typically requires large quantities of expensive, high-fidelity numerical PDE simulations as labeled data. To address this, we propose an unsupervised coreset selection framework that eliminates the need for ground-truth solution labels. Instead, it leverages physics-informed neural network (PINN) losses to evaluate the physical consistency and informativeness of input samples, then integrates multiple coreset selection strategies to identify the most representative instances; only these are passed to the costly numerical solver. By embedding physical constraints directly into the sample selection process, our method significantly reduces annotation overhead and simulation cost. Evaluated on four canonical PDE benchmarks, it achieves up to a 78% average improvement in training efficiency with negligible accuracy degradation, and consistently outperforms existing supervised coreset methods.
Abstract
Neural operators offer a powerful paradigm for solving partial differential equations (PDEs) that cannot be solved analytically, by learning mappings between function spaces. However, training neural operators faces two main bottlenecks: they require large amounts of training data to learn these mappings, and that data must be labeled, with labels obtainable only through expensive simulations with numerical solvers. To alleviate both issues simultaneously, we propose PICore, an unsupervised coreset selection framework that identifies the most informative training samples without requiring access to ground-truth PDE solutions. PICore leverages a physics-informed loss to select unlabeled inputs by their potential contribution to operator learning. After selecting a compact subset of inputs, only those samples are simulated with numerical solvers to generate labels, reducing annotation costs. We then train the neural operator on the reduced labeled dataset, significantly decreasing training time as well. Across four diverse PDE benchmarks and multiple coreset selection strategies, PICore achieves up to a 78% average increase in training efficiency relative to supervised coreset selection methods, with minimal changes in accuracy. We provide code at https://github.com/Asatheesh6561/PICore.
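The selection pipeline described above (score unlabeled inputs by a physics-informed residual loss, then send only the selected coreset to the numerical solver for labeling) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the 1D heat equation, the trivial identity surrogate, the `physics_informed_score` and `picore_select` helpers, and the plain top-k selection rule are all assumptions made for demonstration.

```python
import numpy as np

def physics_informed_score(u0, nu=0.01):
    # Hypothetical physics-informed score: squared residual of the
    # 1D heat equation u_t = nu * u_xx under a cheap identity
    # surrogate (predicted u_t is 0), evaluated on the input u0.
    # A large residual suggests the sample is physically "hard",
    # hence likely informative for operator training.
    u_xx = np.gradient(np.gradient(u0))   # second difference (unit spacing)
    residual = 0.0 - nu * u_xx            # surrogate's u_t minus nu * u_xx
    return float(np.mean(residual ** 2))

def picore_select(candidates, k):
    # Rank unlabeled candidate inputs by physics-informed loss, keep top-k.
    scores = [physics_informed_score(u0) for u0 in candidates]
    order = np.argsort(scores)[::-1]
    return [int(i) for i in order[:k]]

# Toy unlabeled pool: smooth vs. discontinuous initial conditions on [0, 1).
x = np.linspace(0, 1, 64, endpoint=False)
pool = [
    np.sin(2 * np.pi * x),           # smooth, small residual
    np.sign(np.sin(2 * np.pi * x)),  # square wave, large residual at jumps
    0.1 * np.cos(2 * np.pi * x),     # low amplitude, tiny residual
]
selected = picore_select(pool, k=1)
# Only the selected samples would be simulated with the numerical
# solver to produce labels; the rest are never labeled.
```

In this sketch the square wave is selected, since its sharp jumps dominate the second-difference residual; in PICore the scoring would come from a PINN-style PDE loss rather than this hand-rolled finite-difference residual.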