🤖 AI Summary
Solving time-varying partial differential equations (PDEs) on dense discrete spatial grids incurs prohibitive computational cost, while existing neural surrogate models suffer from excessive memory consumption on irregularly sampled domains. To address these challenges, the authors propose CALM-PDE, a continuous adaptive convolutional modeling framework built upon a compressed latent space. The core contribution is a novel continuous convolutional encoder-decoder architecture with an epsilon-neighborhood-constrained kernel, which learns to apply the convolution operator to adaptive, optimized query points. This design overcomes two bottlenecks at once: the rigid grid dependence of conventional CNNs and the quadratic memory complexity of Transformer attention. By combining continuous convolutions, compact latent representations, and a lightweight design, CALM-PDE matches or outperforms state-of-the-art baselines on both regular and irregular PDE benchmarks, while reducing GPU memory usage by over 40% and accelerating inference by more than 3x.
📝 Abstract
Solving time-dependent Partial Differential Equations (PDEs) using a densely discretized spatial domain is a fundamental problem in various scientific and engineering disciplines, including modeling climate phenomena and fluid dynamics. However, performing these computations directly in the physical space often incurs significant computational costs. To address this issue, several neural surrogate models have been developed that operate in a compressed latent space to solve the PDE. While these approaches reduce computational complexity, they often use Transformer-based attention mechanisms to handle irregularly sampled domains, resulting in increased memory consumption. In contrast, convolutional neural networks allow memory-efficient encoding and decoding but are limited to regular discretizations. Motivated by these considerations, we propose CALM-PDE, a model class that efficiently solves arbitrarily discretized PDEs in a compressed latent space. We introduce a novel continuous convolution-based encoder-decoder architecture that uses an epsilon-neighborhood-constrained kernel and learns to apply the convolution operator to adaptive and optimized query points. We demonstrate the effectiveness of CALM-PDE on a diverse set of PDEs with both regularly and irregularly sampled spatial domains. CALM-PDE is competitive with or outperforms existing baseline methods while offering significant improvements in memory and inference time efficiency compared to Transformer-based methods.
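To make the key operation concrete, the following is a minimal NumPy sketch of a continuous convolution whose support is restricted to an epsilon-neighborhood of each query point, in the spirit of the encoder-decoder described above. The kernel function, aggregation rule, and all names here are illustrative assumptions, not the actual CALM-PDE implementation (which additionally learns the kernel as a network and optimizes the query-point locations).

```python
import numpy as np

def continuous_conv(points, features, queries, kernel_fn, eps):
    """Continuous convolution restricted to an epsilon-neighborhood.

    points   : (N, d) array of input sample locations (may be irregular)
    features : (N, c) array of feature values at those locations
    queries  : (M, d) array of output query locations
    kernel_fn: maps relative offsets (k, d) -> per-point weights (k, c);
               in CALM-PDE this would be a learned network
    eps      : neighborhood radius constraining the kernel support
    """
    out = np.zeros((len(queries), features.shape[1]))
    for i, q in enumerate(queries):
        offsets = points - q                          # positions relative to the query
        mask = np.linalg.norm(offsets, axis=1) <= eps  # keep only the eps-neighborhood
        if not mask.any():
            continue
        w = kernel_fn(offsets[mask])                  # kernel evaluated on local offsets
        out[i] = (w * features[mask]).mean(axis=0)    # aggregate weighted local features
    return out

# Toy check on a regular grid: a constant kernel over constant features
# reduces the operator to a local average, so the output is 1.0.
grid = np.linspace(0.0, 1.0, 11)
pts = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, 2)  # 121 grid points
feats = np.ones((len(pts), 1))
qs = np.array([[0.5, 0.5]])
constant_kernel = lambda off: np.ones((len(off), 1))
result = continuous_conv(pts, feats, qs, constant_kernel, eps=0.2)
```

Because every point outside the epsilon-neighborhood is masked out before the kernel is evaluated, the cost per query scales with the local neighborhood size rather than with all N-by-M point pairs, which is the memory advantage over global attention that the abstract highlights.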