Universal Physics Transformers

📅 2024-02-19
🏛️ Neural Information Processing Systems
📈 Citations: 8
Influential: 0
📄 PDF
🤖 AI Summary
Neural operators struggle to generalize across multi-scale, multi-modal physical simulations such as grid-based fluid dynamics, steady-state RANS solvers, and Lagrangian particle dynamics, because their architectures remain tightly coupled to a specific discrete structure (e.g., grids or particles). Method: This work proposes Universal Physics Transformers (UPTs), a unified, structure-agnostic learning framework. UPTs drop grid- and particle-based latent structures in favor of a compact latent space, use inverse encoding and decoding techniques to propagate dynamics efficiently within that latent space, and allow the latent representation to be queried at any point in space-time. Contribution/Results: The framework unifies Eulerian and Lagrangian simulation paradigms in a single architecture, improving cross-domain transferability and computational scalability. Evaluation on three canonical tasks, mesh-based fluid simulation, steady-state RANS, and Lagrangian particle dynamics, demonstrates strong generalization and efficiency relative to existing methods.
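For intuition, below is a minimal sketch of such an encode-propagate-query pipeline in PyTorch. The module names, layer sizes, and the cross-attention pooling scheme are illustrative assumptions made here, not the authors' implementation.

```python
# Hedged sketch of a UPT-style pipeline: pool arbitrary input points into a
# fixed set of latent tokens, advance dynamics in latent space, then decode
# at arbitrary query coordinates. Names and sizes are illustrative only.
import torch
import torch.nn as nn


class PointEncoder(nn.Module):
    """Compress a variable-size set of points (grid cells or particles)
    into a fixed number of latent tokens via cross-attention."""

    def __init__(self, in_dim=3, latent_dim=128, num_latents=64):
        super().__init__()
        self.embed = nn.Linear(in_dim, latent_dim)
        self.latents = nn.Parameter(torch.randn(num_latents, latent_dim))
        self.attn = nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True)

    def forward(self, points):  # points: (B, N, in_dim), N may vary per sample
        x = self.embed(points)
        q = self.latents.expand(points.size(0), -1, -1)
        z, _ = self.attn(q, x, x)  # (B, num_latents, latent_dim)
        return z


class LatentPropagator(nn.Module):
    """Advance the latent state by one time step with a small transformer."""

    def __init__(self, latent_dim=128, depth=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(latent_dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, z):
        return self.blocks(z)


class QueryDecoder(nn.Module):
    """Evaluate the latent field at arbitrary spatio-temporal query points."""

    def __init__(self, query_dim=4, latent_dim=128, out_dim=3):
        super().__init__()
        self.embed = nn.Linear(query_dim, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(latent_dim, out_dim)

    def forward(self, queries, z):  # queries: (B, Q, 4), e.g. (x, y, z, t)
        h, _ = self.attn(self.embed(queries), z, z)
        return self.head(h)  # predicted field values at the query points
```

Because the latent size is fixed regardless of how many cells or particles the input contains, the same three modules could, in principle, be trained on Eulerian grids and Lagrangian particle clouds alike.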

📝 Abstract
Neural operators, serving as physics surrogate models, have recently gained increased interest. With ever-increasing problem complexity, the natural question arises: what is an efficient way to scale neural operators to larger and more complex simulations - most importantly by taking into account different types of simulation datasets. This is of special interest since, akin to their numerical counterparts, different techniques are used across applications, even if the underlying dynamics of the systems are similar. Whereas the flexibility of transformers has enabled unified architectures across domains, neural operators mostly follow a problem-specific design, where GNNs are commonly used for Lagrangian simulations and grid-based models predominate in Eulerian simulations. We introduce Universal Physics Transformers (UPTs), an efficient and unified learning paradigm for a wide range of spatio-temporal problems. UPTs operate without grid- or particle-based latent structures, enabling flexibility and scalability across meshes and particles. UPTs efficiently propagate dynamics in the latent space, emphasized by inverse encoding and decoding techniques. Finally, UPTs allow for queries of the latent space representation at any point in space-time. We demonstrate diverse applicability and efficacy of UPTs in mesh-based fluid simulations, steady-state Reynolds-averaged Navier-Stokes simulations, and Lagrangian-based dynamics.
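The latent-space propagation described in the abstract can be illustrated with the hypothetical modules sketched earlier: encode once, step the latent state forward repeatedly, and decode only at the space-time points of interest. This is a usage sketch under the assumptions of that code block, not the paper's actual rollout or training procedure.

```python
# Usage sketch: roll dynamics forward purely in latent space, decoding only
# where needed. Reuses the hypothetical PointEncoder / LatentPropagator /
# QueryDecoder modules sketched earlier; shapes and step counts are arbitrary.
import torch

encoder, propagator, decoder = PointEncoder(), LatentPropagator(), QueryDecoder()

points = torch.randn(1, 5000, 3)      # one snapshot: 5000 particles with (x, y, z)
with torch.no_grad():
    z = encoder(points)               # fixed-size latent state, independent of N
    for _ in range(10):               # 10 time steps without re-encoding
        z = propagator(z)
    queries = torch.rand(1, 200, 4)   # arbitrary (x, y, z, t) evaluation points
    fields = decoder(queries, z)      # e.g. predicted velocity at those points

print(fields.shape)                   # torch.Size([1, 200, 3])
```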
Problem

Research questions and friction points this paper is trying to address.

Scaling neural operators efficiently
Unifying learning paradigms across simulations
Enhancing flexibility and scalability in physics models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Universal Physics Transformers framework
Latent space dynamics propagation
Inverse encoding and decoding techniques
Benedikt Alkin
Emmi AI
Machine Learning, Computer Vision, Neural Operators
Andreas Fürst
ELLIS Unit Linz, Institute for Machine Learning, JKU Linz, Austria
Simon Schmid
Software Competence Center Hagenberg GmbH, Hagenberg, Austria
Lukas Gruber
ELLIS Unit Linz, Institute for Machine Learning, JKU Linz, Austria
Markus Holzleitner
Unknown affiliation
Johannes Brandstetter
Johannes Kepler University (JKU) Linz
Deep Learning, AI4Science, AI4Simulation, Physics