🤖 AI Summary
How scientific understanding, and in particular comprehension of physical laws, emerges in large language models (LLMs) during in-context learning (ICL) remains poorly understood.
Method: We employ dynamics prediction on real physical systems as a controlled testbed, modeling LLM reasoning through interpretable dynamical analysis. For the first time, we use sparse autoencoders to dissect residual stream activations and quantitatively align the extracted implicit representations with fundamental physical variables (e.g., energy).
Contribution/Results: Experiments reveal that prediction accuracy improves monotonically with context length. Crucially, the learned sparse features exhibit statistically significant correlations with physical quantities, demonstrating that LLMs spontaneously develop implicit encodings of physical concepts during ICL. This work provides the first mechanistic, interpretable, and empirically verifiable evidence for emergent scientific reasoning in LLMs.
📝 Abstract
Large language models (LLMs) exhibit impressive in-context learning (ICL) abilities, enabling them to solve a wide range of tasks via textual prompts alone. As these capabilities advance, the range of applicable domains continues to expand. However, identifying the precise mechanisms or internal structures within LLMs that enable successful ICL across diverse, distinct classes of tasks remains elusive. Physics-based tasks offer a promising testbed for probing this challenge. Unlike synthetic sequences such as basic arithmetic or symbolic equations, physical systems provide experimentally controllable, real-world data governed by structured dynamics grounded in fundamental principles. This makes them particularly suitable for studying the emergent reasoning behaviors of LLMs in a realistic yet tractable setting. Here, we mechanistically investigate the ICL ability of LLMs, focusing especially on their capacity to reason about physics. Using a dynamics forecasting task in physical systems as a proxy, we evaluate whether LLMs can learn physics in context. We first show that in-context dynamics forecasting performance improves with longer input contexts. To uncover how this capability emerges, we analyze the model's residual stream activations using sparse autoencoders (SAEs). Our experiments reveal that the features captured by the SAEs correlate with key physical variables, such as energy. These findings demonstrate that meaningful physical concepts are encoded within LLMs during in-context learning. In sum, our work provides a novel case study that broadens our understanding of how LLMs learn in context.
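The analysis pipeline described above (train a sparse autoencoder on residual stream activations, then correlate the learned sparse features with a physical variable such as energy) can be sketched in miniature. This is a hypothetical illustration only: the activations and the "energy" signal are synthetic stand-ins, the SAE is a tiny ReLU autoencoder with an L1 penalty trained by plain gradient descent, and none of the paper's actual models, data, or hyperparameters are used.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, d_model, d_feats = 512, 16, 32

# Synthetic "physical variable" (energy) that linearly drives one direction
# of a stand-in residual stream, plus noise. No real LLM is involved.
energy = rng.uniform(0.0, 1.0, size=n_samples)
direction = rng.normal(size=d_model)
acts = np.outer(energy, direction) + 0.05 * rng.normal(size=(n_samples, d_model))

# Tiny ReLU sparse autoencoder: encoder W_e, decoder W_d, L1 sparsity penalty.
W_e = 0.1 * rng.normal(size=(d_model, d_feats))
W_d = 0.1 * rng.normal(size=(d_feats, d_model))
b_e, b_d = np.zeros(d_feats), np.zeros(d_model)

def features(x):
    """Sparse feature activations (ReLU of the encoder output)."""
    return np.maximum(x @ W_e + b_e, 0.0)

def recon_loss(x):
    """Mean squared reconstruction error of the SAE."""
    return 0.5 * np.mean((features(x) @ W_d + b_d - x) ** 2)

loss_before = recon_loss(acts)
lr, l1 = 0.05, 1e-3
for _ in range(500):
    f = features(acts)
    err = f @ W_d + b_d - acts                      # reconstruction error
    # Gradient of 0.5*||err||^2/n + l1*sum(f)/n through the ReLU mask.
    g_f = (err @ W_d.T + l1) * (f > 0.0) / n_samples
    W_d -= lr * f.T @ err / n_samples
    b_d -= lr * err.mean(axis=0)
    W_e -= lr * acts.T @ g_f
    b_e -= lr * g_f.sum(axis=0)
loss_after = recon_loss(acts)

# Align each learned feature with the physical variable via |Pearson r|,
# skipping dead (constant-zero) features.
f = features(acts)
corrs = [abs(np.corrcoef(f[:, j], energy)[0, 1]) if f[:, j].std() > 0 else 0.0
         for j in range(d_feats)]
print(f"loss {loss_before:.4f} -> {loss_after:.4f}, "
      f"best |corr(feature, energy)| = {max(corrs):.2f}")
```

In this toy setting the reconstruction loss drops during training and at least one sparse feature tracks the planted energy signal; the paper's claim is the analogous alignment measured on real LLM activations.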