🤖 AI Summary
This work addresses the limited ability of current language models to dynamically acquire and apply new knowledge, such as domain-specific rules or empirical laws, from task contexts in complex real-world scenarios. To systematically define and evaluate this capability, which the authors term context learning, they introduce CL-bench, a benchmark comprising 500 expert-designed complex contexts, 1,899 tasks, and 31,607 verification rubrics, structured as context-task-verification triplets. Evaluation across ten state-of-the-art models reveals a significant performance bottleneck: models complete only 17.2% of tasks on average, and even the best-performing model, GPT-5.1, achieves just 23.7%. The benchmark fills a critical gap in assessing dynamic knowledge acquisition and application, highlighting a key limitation in contemporary language models' context-learning capabilities.
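To make the context-task-verification structure concrete, the sketch below shows one way such a benchmark could be represented and scored. All names (`ContextTaskTriplet`, `Rubric`, `solve_rate`) and the all-rubrics-must-pass scoring rule are illustrative assumptions, not CL-bench's actual data format or evaluation code.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical schema: field names are illustrative, not taken from CL-bench itself.
@dataclass
class Rubric:
    description: str              # human-readable verification criterion
    check: Callable[[str], bool]  # returns True if a response satisfies the criterion

@dataclass
class Task:
    prompt: str                   # the task statement posed to the model
    rubrics: List[Rubric]         # criteria the response is verified against

@dataclass
class ContextTaskTriplet:
    context: str                  # expert-written context holding the new knowledge
    tasks: List[Task]             # tasks that depend on that context

def solve_rate(triplets: List[ContextTaskTriplet],
               answer: Callable[[str, str], str]) -> float:
    """Fraction of tasks whose responses satisfy every rubric.

    `answer(context, prompt)` stands in for a model call. A task counts as
    solved only if all of its rubrics pass; this all-or-nothing aggregation
    is an assumption, since the paper's exact scoring is not reproduced here.
    """
    total = solved = 0
    for triplet in triplets:
        for task in triplet.tasks:
            total += 1
            response = answer(triplet.context, task.prompt)
            if all(rubric.check(response) for rubric in task.rubrics):
                solved += 1
    return solved / total if total else 0.0
```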
📝 Abstract
Current language models (LMs) excel at reasoning over prompts using pre-trained knowledge. However, real-world tasks are far more complex and context-dependent: models must learn from task-specific context and leverage new knowledge beyond what was learned during pre-training to reason about and resolve tasks. We term this capability context learning, a crucial ability that humans naturally possess but one that has been largely overlooked in LMs. To this end, we introduce CL-bench, a real-world benchmark consisting of 500 complex contexts, 1,899 tasks, and 31,607 verification rubrics, all crafted by experienced domain experts. Each task is designed such that the new knowledge required to resolve it is contained within the corresponding context. Resolving tasks in CL-bench requires models to learn from the context: the material needed ranges from new domain-specific knowledge, rule systems, and complex procedures to laws derived from empirical data, all of which are absent from pre-training. This goes far beyond long-context tasks, which primarily test retrieval or reading comprehension, and in-context learning tasks, where models learn simple task patterns via instructions and demonstrations. Our evaluation of ten frontier LMs finds that models solve only 17.2% of tasks on average. Even the best-performing model, GPT-5.1, solves only 23.7%, revealing that LMs have yet to achieve effective context learning, which remains a critical bottleneck for tackling real-world, complex, context-dependent tasks. CL-bench represents a step towards building LMs with this fundamental capability, making them more intelligent and advancing their deployment in real-world scenarios.