MORPH: Shape-agnostic PDE Foundation Models

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of shape-agnostic foundational models for PDE modeling on heterogeneous spatiotemporal data—encompassing 1D–3D domains, multi-resolution grids, and mixed scalar/vector fields. We propose the first unified autoregressive PDE foundation model. Methodologically, we introduce component-wise convolutions, inter-field cross-attention, and axial attention to preserve physical expressivity while substantially reducing computational cost; adopt a convolutional Vision Transformer (ViT) architecture; and integrate LoRA for parameter-efficient fine-tuning. Experiments demonstrate that our model outperforms from-scratch baselines in both zero-shot and full-data settings, matching or exceeding state-of-the-art methods. It exhibits strong generalization and transferability across dimensions (1D–3D), field types (scalar/vector), and tasks (e.g., forecasting, reconstruction, control), establishing a scalable, physics-informed foundation for diverse PDE-driven applications.
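The summary notes that LoRA is integrated for parameter-efficient fine-tuning. A minimal NumPy sketch of the low-rank adapter idea (not MORPH's actual implementation; all names and sizes here are illustrative) shows how a frozen weight matrix is augmented with a trainable low-rank update:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Low-rank adapted linear layer: y = x W^T + alpha * x (B A)^T.

    W: (out, in) frozen pretrained weight.
    A: (r, in) and B: (out, r) are the trainable low-rank factors, r << min(in, out),
    so only r*(in + out) parameters are updated instead of in*out.
    """
    return x @ W.T + alpha * (x @ A.T) @ B.T

# Example: adapting a 64x64 layer with rank 4 (512 trainable params vs 4096 frozen).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))          # frozen pretrained weight
A = rng.standard_normal((4, 64)) * 0.01    # trainable down-projection
B = np.zeros((64, 4))                      # zero-init: adapter starts as a no-op
x = rng.standard_normal((8, 64))
y = lora_forward(x, W, A, B)
# With B zero-initialized, the adapted layer reproduces the frozen layer exactly.
assert np.allclose(y, x @ W.T)
```

The zero initialization of `B` is the standard LoRA trick: fine-tuning begins from the pretrained model's behavior and only gradually departs from it.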

📝 Abstract
We introduce MORPH, a shape-agnostic, autoregressive foundation model for partial differential equations (PDEs). MORPH is built on a convolutional vision transformer backbone that seamlessly handles heterogeneous spatiotemporal datasets of varying dimensionality (1D--3D), different resolutions, and multiple fields with mixed scalar and vector components. The architecture combines (i) component-wise convolution, which jointly processes scalar and vector channels to capture local interactions, (ii) inter-field cross-attention, which models and selectively propagates information between different physical fields, and (iii) axial attention, which factorizes full spatiotemporal self-attention along individual spatial and temporal axes to reduce computational burden while retaining expressivity. We pretrain multiple model variants on a diverse collection of heterogeneous PDE datasets and evaluate transfer to a range of downstream prediction tasks. Using both full-model fine-tuning and parameter-efficient low-rank adapters (LoRA), MORPH outperforms models trained from scratch in both zero-shot and full-shot generalization. Across extensive evaluations, MORPH matches or surpasses strong baselines and recent state-of-the-art models. Collectively, these capabilities make it a flexible and powerful backbone for learning from the heterogeneous and multimodal nature of scientific observations, charting a path toward scalable and data-efficient scientific machine learning.
Problem

Research questions and friction points this paper is trying to address.

Modeling heterogeneous PDE datasets across varying dimensions
Capturing interactions between mixed scalar and vector fields
Enabling scalable data-efficient scientific machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convolutional vision transformer backbone handles heterogeneous spatiotemporal data
Component-wise convolution processes scalar and vector channels jointly
Axial attention factorizes spatiotemporal self-attention to reduce computation
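Of the three mechanisms, axial attention is the most self-contained to illustrate. A hedged NumPy sketch (an idealized single-head version with queries = keys = values; MORPH's actual layers use learned projections and multiple heads) shows the factorization that trades full spatiotemporal attention for per-axis attention:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend_along(x, axis):
    """Self-attention restricted to one axis; all other axes act as batch dims."""
    x = np.moveaxis(x, axis, -2)                    # (..., L, d)
    d = x.shape[-1]
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(d)    # (..., L, L)
    out = softmax(scores) @ x
    return np.moveaxis(out, -2, axis)

def axial_attention(x):
    """Factorized spatiotemporal attention over a (T, H, W, d) tensor.

    Full attention over N = T*H*W tokens costs O(N^2); attending along each
    axis in turn costs O(N * (T + H + W)) while information still mixes
    globally after the sequence of axis-wise passes.
    """
    for axis in range(x.ndim - 1):                  # time axis, then each spatial axis
        x = attend_along(x, axis)
    return x

field = np.random.default_rng(1).standard_normal((4, 8, 8, 16))  # (T, H, W, d)
out = axial_attention(field)
assert out.shape == field.shape
```

The same loop applies unchanged to 1D (T, X, d) or 3D (T, X, Y, Z, d) fields, which is what makes the factorization attractive for a shape-agnostic model.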
Mahindra Singh Rautela
Instrumentation and Controls Group (AOT-IC), Los Alamos National Laboratory, Los Alamos, New Mexico, US, 87545
Alexander Most
Computing and Artificial Intelligence Division (CAI), Los Alamos National Laboratory, Los Alamos, New Mexico, US, 87545
Siddharth Mansingh
Computing and Artificial Intelligence Division (CAI), Los Alamos National Laboratory, Los Alamos, New Mexico, US, 87545
Bradley C. Love
Senior Research Scientist, Los Alamos National Laboratory. Formerly professor at utexas.edu and ucl.ac.uk.
AI for scientific discovery, deep learning, computational neuroscience, human-machine teaming
Ayan Biswas
Computing and Artificial Intelligence Division (CAI), Los Alamos National Laboratory, Los Alamos, New Mexico, US, 87545
Diane Oyen
Los Alamos National Laboratory
Machine Learning
Earl Lawrence
Los Alamos National Laboratory
fencing, fighting, revenge, true love, miracles