Language Models Struggle to Use Representations Learned In-Context

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates, for the first time, the ability of both open- and closed-source large language models (LLMs) to deploy semantic representations newly acquired through in-context learning when performing downstream tasks. Using probes based on next-token prediction and a novel adaptive world modeling task, the research reveals that even when LLMs successfully encode new semantic information from context, they fail to reliably use it for reasoning. This highlights a fundamental limitation in how current models leverage learned representations, underscoring a critical gap in their capacity for semantic generalization and task adaptation. The findings suggest that contextual acquisition of meaning does not necessarily translate into functional deployment, exposing a key challenge in the development of more robust and flexible language models.

📝 Abstract
Though large language models (LLMs) have enabled great success across a wide variety of tasks, they still appear to fall short of one of the loftier goals of artificial intelligence research: creating an artificial system that can adapt its behavior to radically new contexts upon deployment. One important step towards this goal is to create systems that can induce rich representations of data that are seen in-context, and then flexibly deploy these representations to accomplish goals. Recently, Park et al. (2024) demonstrated that current LLMs are indeed capable of inducing such representations from context (i.e., in-context representation learning). The present study investigates whether LLMs can use these representations to complete simple downstream tasks. We first assess whether open-weights LLMs can use in-context representations for next-token prediction, and then probe models using a novel task, adaptive world modeling. In both tasks, we find evidence that open-weights LLMs struggle to deploy representations of novel semantics that are defined in-context, even if they encode these semantics in their latent representations. Furthermore, we assess closed-source, state-of-the-art reasoning models on the adaptive world modeling task, demonstrating that even the most performant LLMs cannot reliably leverage novel patterns presented in-context. Overall, this work seeks to inspire novel methods for encouraging models to not only encode information presented in-context, but to do so in a manner that supports flexible deployment of this information.
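To make the next-token-prediction probe concrete, here is a minimal, hypothetical sketch (not the paper's actual code) in the spirit of the setup described above: a novel "world" is defined purely in-context as a random walk over a small graph of nonce tokens, and a predictor is scored on whether its continuation respects the transition structure. The graph, nonce words, and function names are illustrative assumptions; any real model would be plugged in as the `predict_next_token` callable.

```python
import random

def build_graph():
    # A 4-node ring world; each token's valid successors are its ring neighbors.
    # Nonce words ensure the semantics exist only in-context, not in pretraining.
    nodes = ["blik", "dax", "fep", "wug"]
    edges = {n: [nodes[(i - 1) % 4], nodes[(i + 1) % 4]] for i, n in enumerate(nodes)}
    return nodes, edges

def build_prompt(nodes, edges, walk_len=30, seed=0):
    # Present example transitions as a space-separated random walk; return the
    # prompt and its final token (whose successors define the correct answers).
    rng = random.Random(seed)
    walk = [rng.choice(nodes)]
    for _ in range(walk_len - 1):
        walk.append(rng.choice(edges[walk[-1]]))
    return " ".join(walk), walk[-1]

def valid_next_rate(predict_next_token, edges, prompts):
    # Fraction of prompts whose predicted continuation is a legal transition
    # under the in-context-defined world.
    hits = sum(1 for prompt, last in prompts
               if predict_next_token(prompt) in edges[last])
    return hits / len(prompts)
```

A model that merely encodes the graph in its latent states but cannot deploy it would score near chance on `valid_next_rate`; an oracle that reads off a neighbor of the last token scores 1.0, which gives the probe its ceiling.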
Problem

Research questions and friction points this paper is trying to address.

in-context learning
representation deployment
language models
adaptive behavior
semantic generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context learning
representation deployment
adaptive world modeling
large language models
semantic generalization