On the Value of Tokeniser Pretraining in Physics Foundation Models

📅 2026-03-05
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the inefficiency and limited accuracy that arise when representation learning and dynamical modelling of high-dimensional physical data are learned jointly from scratch. It presents the first systematic investigation of tokeniser pretraining in physics foundation models, introducing a domain-aligned autoencoding pretraining strategy coupled with a runtime-adjustable spatiotemporal compression mechanism that jointly optimises compact representations and simulation efficiency. In-domain pretraining reduces the volume-weighted root mean square error (VRMSE) by 64% after 10,500 training steps, substantially outperforming training from scratch; cross-domain pretraining yields consistent, if smaller, improvements. Together these results show that the approach enhances both the accuracy and the computational efficiency of physics emulation.
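The summary expands VRMSE as volume-weighted root mean square error but gives no formula. A minimal sketch of that reading, assuming squared errors are weighted by each grid cell's volume before the root is taken (the function name and weighting scheme are illustrative assumptions, not the paper's definition):

```python
import numpy as np

def vrmse(pred, target, cell_volumes):
    """Volume-weighted RMSE over a (possibly non-uniform) grid.

    pred, target : arrays of shape (..., n_cells)
    cell_volumes : array of shape (n_cells,)
    """
    w = cell_volumes / cell_volumes.sum()   # normalise weights to sum to 1
    sq_err = (pred - target) ** 2           # pointwise squared error
    return float(np.sqrt((sq_err * w).sum(axis=-1).mean()))
```

On a uniform grid all weights are equal and this reduces to ordinary RMSE; the weighting only matters when cells differ in volume.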

📝 Abstract
We investigate the impact of tokeniser pretraining on the accuracy and efficiency of physics emulation. Modern high-resolution simulations produce vast volumes of data spanning diverse physical regimes and scales. Training foundation models to learn the dynamics underlying such data enables the modelling of complex multiphysics phenomena, especially in data-limited settings. The emerging class of physics foundation models typically aims to learn two tasks jointly: (i) extracting compact representations of high-resolution spatiotemporal data, and (ii) capturing governing physical dynamics. However, learning both tasks from scratch simultaneously can impede the effectiveness of either process. We demonstrate that pretraining the tokeniser with an autoencoding objective prior to training the dynamics model enhances computational efficiency for downstream tasks. Notably, the magnitude of this benefit depends on domain alignment: pretraining on the same physical system as the downstream task yields the largest improvements, while pretraining on other systems provides moderate gains. In-domain pretraining reduces VRMSE by 64% after 10,500 training steps compared to training from scratch. To our knowledge, this is the first systematic investigation of tokeniser pretraining for physics foundation models. We further introduce flexible spatiotemporal compression operations that extend causal convolutions to support runtime-adjustable compression ratios, enabling efficient adaptation to diverse downstream tasks. Our findings provide practical guidance for training efficient physics emulators and highlight the importance of strategic pretraining data selection.
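The abstract's central claim is a training schedule: decouple the two tasks by first pretraining the tokeniser with an autoencoding objective, then fitting the dynamics model on its tokens. A minimal PyTorch sketch of that two-stage schedule, assuming a convolutional encoder/decoder tokeniser and a tokeniser frozen in stage two (all module names, shapes, and the freezing choice are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tokeniser(nn.Module):
    """Maps a field (N, C, T, H, W) to compact tokens and back."""
    def __init__(self, channels=4, latent=32):
        super().__init__()
        self.encoder = nn.Conv3d(channels, latent, kernel_size=4, stride=4)
        self.decoder = nn.ConvTranspose3d(latent, channels, kernel_size=4, stride=4)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def pretrain_tokeniser(tok, loader, steps, lr=1e-4):
    """Stage 1: autoencoding objective only -- reconstruct the input field."""
    opt = torch.optim.Adam(tok.parameters(), lr=lr)
    for _, x in zip(range(steps), loader):
        recon, _ = tok(x)
        loss = F.mse_loss(recon, x)
        opt.zero_grad(); loss.backward(); opt.step()

def train_dynamics(tok, dynamics, loader, steps, lr=1e-4):
    """Stage 2: freeze the pretrained tokeniser; fit dynamics in token space."""
    tok.eval()
    for p in tok.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(dynamics.parameters(), lr=lr)
    for _, (x_t, x_next) in zip(range(steps), loader):
        with torch.no_grad():                 # tokens come from the frozen encoder
            z_t, z_next = tok.encoder(x_t), tok.encoder(x_next)
        loss = F.mse_loss(dynamics(z_t), z_next)
        opt.zero_grad(); loss.backward(); opt.step()
```

Any token-space predictor works as `dynamics`, e.g. `nn.Conv3d(32, 32, 3, padding=1)` for a trivial baseline; the point of the schedule is that stage two no longer has to learn compression and dynamics simultaneously.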
Problem

Research questions and friction points this paper is trying to address.

tokeniser pretraining
physics foundation models
spatiotemporal data
computational efficiency
domain alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

tokeniser pretraining
physics foundation models
spatiotemporal compression
autoencoding
causal convolutions (see the sketch after this list)
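The list above names runtime-adjustable spatiotemporal compression built on causal convolutions. One plausible reading is a single learned kernel whose temporal and spatial strides (the compression ratios) are chosen per call; a sketch under that assumption, with the class name, kernel sizes, and padding scheme invented for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjustableCausalConv3d(nn.Module):
    """Causal 3D convolution whose compression ratios are picked at call time."""
    def __init__(self, in_ch, out_ch, kernel=(4, 3, 3)):
        super().__init__()
        self.kernel = kernel
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, *kernel) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x, t_ratio=1, s_ratio=1):
        # x: (N, C, T, H, W)
        kt, kh, kw = self.kernel
        # Left-pad time so no output sees future frames (causality);
        # symmetric-pad space so edges are covered at any stride.
        x = F.pad(x, (kw // 2, kw // 2, kh // 2, kh // 2, kt - 1, 0))
        return F.conv3d(x, self.weight, self.bias,
                        stride=(t_ratio, s_ratio, s_ratio))

conv = AdjustableCausalConv3d(in_ch=4, out_ch=32)
x = torch.randn(1, 4, 16, 64, 64)
y2 = conv(x, t_ratio=2, s_ratio=4)   # 2x temporal, 4x spatial compression
y4 = conv(x, t_ratio=4, s_ratio=8)   # heavier compression, same weights
```

Sharing one set of weights across ratios is what would let a single pretrained tokeniser serve downstream tasks with different compression needs at runtime.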