When do World Models Successfully Learn Dynamical Systems?

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the conditions under which world models can effectively learn physical dynamical systems from low-dimensional projections and concatenations—i.e., “tokenizations”—of historical frames. We establish a rigorous theoretical framework characterizing the necessary and sufficient conditions for a dynamical system to be accurately modeled via tokenized history sequences, clarifying why compact temporal representations suffice for precise future-state reconstruction and thereby bridging the theoretical gap between representation learning and dynamics modeling. Methodologically, we adopt a progressive modeling paradigm—from linear regression and shallow adversarial networks to full GANs—and validate our approach on canonical PDEs (heat equation, wave equation, chaotic Kuramoto–Sivashinsky equation) and a 2D Karman vortex street CFD dataset. Results demonstrate that high-fidelity reconstruction of complex nonlinear evolution is achievable using only compact latent sequences, significantly enhancing interpretability and out-of-distribution generalization of physics-informed world models.
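The tokenization idea described above can be sketched in a few lines: project each time slice onto a low-dimensional basis, concatenate a short history of these latent tokens, and fit a next-token predictor. The sketch below uses a 1D heat equation, a PCA (SVD) projection, and the paper's simplest model class, least-squares regression; all dimensions and parameters here are illustrative choices, not taken from the paper.

```python
# Minimal sketch of tokenize-then-regress on an illustrative 1D heat equation.
# Latent dimension k, history length h, and all solver settings are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Generate snapshots of u_t = nu * u_xx with explicit Euler (stable: nu*dt/dx^2 < 0.5)
nx, nt, nu, dt = 64, 400, 0.1, 0.001
dx = 1.0 / nx
u = np.sin(2 * np.pi * np.arange(nx) * dx) + 0.3 * rng.standard_normal(nx)
snapshots = [u.copy()]
for _ in range(nt - 1):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic Laplacian
    u = u + dt * nu * lap
    snapshots.append(u.copy())
X = np.array(snapshots)                      # (nt, nx) full-state frames

# "Tokenize": project each frame onto a k-dimensional PCA basis
k = 4
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
tokens = Xc @ Vt[:k].T                       # (nt, k) latent tokens

# Concatenate a history of h tokens; fit a least-squares next-token map
h = 3
H = np.hstack([tokens[i : nt - h + i] for i in range(h)])   # (nt-h, h*k)
Y = tokens[h:]                                              # next token
W, *_ = np.linalg.lstsq(H, Y, rcond=None)

# Relative one-step prediction error in latent space
err = np.linalg.norm(H @ W - Y) / np.linalg.norm(Y)
print(f"relative one-step latent error: {err:.2e}")
```

For this linear PDE the tokenized history admits an accurate linear reconstruction map, which is the simplest instance of the reconstruction property the paper characterizes; the nonlinear datasets (Kuramoto–Sivashinsky, Kármán vortex street) motivate the step up to adversarial learners and GANs.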

📝 Abstract
In this work, we explore the use of compact latent representations with learned time dynamics ('World Models') to simulate physical systems. Drawing on concepts from control theory, we propose a theoretical framework that explains why projecting time slices into a low-dimensional space and then concatenating them to form a history ('Tokenization') is so effective at learning physics datasets, and characterise exactly when the underlying dynamics admit a reconstruction mapping from the history of previous tokenized frames to the next. To validate these claims, we develop a sequence of models of increasing complexity, starting with least-squares regression and progressing through simple linear layers, shallow adversarial learners, and ultimately full-scale generative adversarial networks (GANs). We evaluate these models on a variety of datasets, including modified forms of the heat and wave equations, the 2D Kuramoto–Sivashinsky equation in its chaotic regime, and a challenging computational fluid dynamics (CFD) dataset of a 2D Kármán vortex street around a fixed cylinder, where our model successfully recreates the flow.
Problem

Research questions and friction points this paper is trying to address.

When do compact latent representations effectively simulate physical systems?
Why does tokenization improve learning in physics datasets?
Can World Models accurately reconstruct complex dynamical systems like fluid dynamics?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compact latent representations with learned dynamics
Tokenization of time slices for physics learning
Progressive model complexity from regression to GANs
Edmund Ross
Technical University of Berlin, Institute of Mathematics
Claudia Drygala
Technical University of Berlin, Institute of Mathematics
Leonhard Schwarz
Technical University of Berlin, Institute of Mathematics
Samir Kaiser
Technical University of Berlin, Institute of Mathematics
Francesca di Mare
Ruhr University Bochum, Department of Mechanical Engineering, Chair of Thermal Turbomachines and Aero Engines
Tobias Breiten
Technical University of Berlin, Institute of Mathematics
Hanno Gottschalk
Professor for Mathematical Modeling of Industrial Life Cycles, TU Berlin
Applied Mathematics · Computer Vision · Machine Learning · Mathematical Physics