Context parroting: A simple but tough-to-beat baseline for foundation models in scientific machine learning

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a pervasive "context parroting" phenomenon in scientific time-series foundation models, where predictions are generated by directly copying historical trajectories rather than by learning the underlying physical dynamics. Method: The authors formally define and name this mechanism, establish theoretical connections to induction heads and to the fractal dimension of dynamical-system attractors, and develop a lightweight, zero-shot parroting baseline grounded in dynamical systems analysis and fractal geometry. Contribution/Results: Experiments demonstrate that this baseline outperforms state-of-the-art models across diverse chaotic and nonlinear systems at negligible computational cost. The study further reveals a scaling law relating prediction accuracy to context length, governed by the attractor's fractal dimension, providing a physics-informed benchmark for evaluating genuine emergent capabilities in scientific AI.

📝 Abstract
Recently developed time series foundation models for scientific machine learning exhibit emergent abilities to predict physical systems. These abilities include zero-shot forecasting, in which a model forecasts future states of a system given only a short trajectory as context. Here, we show that foundation models applied to physical systems can give accurate predictions, but that they fail to develop meaningful representations of the underlying physics. Instead, foundation models often forecast by context parroting, a simple zero-shot forecasting strategy that copies directly from the context. As a result, a naive direct context parroting model scores higher than state-of-the-art time-series foundation models on predicting a diverse range of dynamical systems, at a tiny fraction of the computational cost. We draw a parallel between context parroting and induction heads, which explains why large language models trained on text can be repurposed for time series forecasting. Our dynamical systems perspective also ties the scaling between forecast accuracy and context length to the fractal dimension of the attractor, providing insight into the previously observed in-context neural scaling laws. Context parroting thus serves as a simple but tough-to-beat baseline for future time-series foundation models and can help identify in-context learning strategies beyond parroting.
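The abstract describes context parroting as a zero-shot strategy that copies directly from the context. A minimal sketch of one such copy-based forecaster (an assumption about the mechanism for illustration, not the paper's exact implementation): match the most recent observations against past windows of the context, then parrot the values that followed the best match.

```python
import numpy as np

def parrot_forecast(context, horizon, window=8):
    """Zero-shot forecast by copying from the context.

    context: 1-D array of observed values.
    horizon: number of future steps to predict.
    window:  length of the query segment matched against the past.
    """
    context = np.asarray(context, dtype=float)
    query = context[-window:]  # most recent observations
    best_err, best_end = np.inf, None
    # Slide over past windows whose continuation still fits inside the context.
    for end in range(window, len(context) - horizon + 1):
        err = np.sum((context[end - window:end] - query) ** 2)
        if err < best_err:
            best_err, best_end = err, end
    # Parrot the values that immediately followed the best-matching window.
    return context[best_end:best_end + horizon].copy()
```

On a periodic or near-recurrent trajectory (as on a chaotic attractor revisiting similar states), this copies the continuation of the closest past recurrence, which is why such a baseline can be surprisingly hard to beat.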
Problem

Research questions and friction points this paper is trying to address.

Time-series foundation models give accurate forecasts of physical systems without learning the underlying physics.
A naive context-parroting baseline outperforms state-of-the-art time-series foundation models.
The scaling of forecast accuracy with context length is governed by the attractor's fractal dimension.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes a direct context-parroting baseline for zero-shot forecasting
Draws a parallel between context parroting and induction heads in language models
Links the scaling of forecast accuracy with context length to the attractor's fractal dimension