🤖 AI Summary
This study investigates how the representational geometry of large language models dynamically adapts during in-context learning across different task types, with a focus on whether the “representational straightening” phenomenon occurs universally. By analyzing neural trajectories of Gemma 2 across diverse tasks—including natural language, grid-world navigation, and few-shot learning—and combining representational geometry analysis with straightness metrics, the work reveals that in-context learning is not a uniform mechanism but a task-structure-dependent strategy selection process. Specifically, continuous prediction tasks exhibit markedly increased representational straightness and performance as context lengthens, whereas structured tasks show straightening only during explicit template phases. These findings support a “Swiss Army knife” view of large language models, wherein they dynamically switch strategies based on task demands, challenging the prevailing assumption of a single, monolithic in-context learning process.
📝 Abstract
Large Language Models (LLMs) have been shown to organize the representations of input sequences into straighter neural trajectories in their deep layers, which has been hypothesized to facilitate next-token prediction via linear extrapolation. Language models can also adapt to diverse tasks and learn new structure in context, and recent work has shown that this in-context learning (ICL) can be reflected in representational changes. Here we bring these two lines of research together to explore whether representation straightening occurs \emph{within} a context during ICL. We measure representational straightening in Gemma 2 models across a diverse set of in-context tasks, and uncover a dichotomy in how LLMs' representations change in context. In continual prediction settings (e.g., natural language, grid world traversal tasks) we observe that increasing context increases the straightness of neural sequence trajectories, which is correlated with improvement in model prediction. Conversely, in structured prediction settings (e.g., few-shot tasks), straightening is inconsistent -- it is only present in phases of the task with explicit structure (e.g., repeating a template), but vanishes elsewhere. These results suggest that ICL is not a monolithic process. Instead, we propose that LLMs function like a Swiss Army knife: depending on task structure, the LLM dynamically selects between strategies, only some of which yield representational straightening.
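The straightness metric underlying this line of work is typically defined from the curvature of a hidden-state trajectory: the angle between consecutive step vectors, averaged along the sequence. The abstract does not give the exact formula used here, so the sketch below is a minimal, hedged illustration of that standard convention (function names and the sign convention for "straightness" are my own, not the paper's):

```python
import numpy as np

def average_curvature(states: np.ndarray) -> float:
    """Mean angle (radians) between consecutive difference vectors of a
    trajectory of hidden states with shape (T, d). Lower means straighter."""
    diffs = np.diff(states, axis=0)  # (T-1, d) step vectors between tokens
    diffs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    # Cosine of the angle between each pair of consecutive steps.
    cos = np.sum(diffs[:-1] * diffs[1:], axis=1)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def straightness(states: np.ndarray) -> float:
    # Illustrative convention: negate curvature so that straighter
    # trajectories receive higher scores.
    return -average_curvature(states)
```

On a perfectly collinear trajectory the curvature is zero; a trajectory that turns a right angle has curvature pi/2, so its straightness score is lower.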