🤖 AI Summary
Large language models struggle to track continuously evolving knowledge in dynamic environments, leading to factual lag and reduced accuracy. To address this limitation, this work proposes OAKS, the first online-adaptation benchmark framework tailored to continuous knowledge streams, comprising two fine-grained, densely annotated dynamic datasets: OAKS-BABI and OAKS-Novel. The framework uses streaming context chunking and dynamic fact-evolution modeling to simulate real-time knowledge updates. A systematic evaluation of 14 prominent language models and memory-augmented agents reveals pervasive delays in state-tracking and susceptibility to interference, highlighting a critical gap in current approaches' capacity for online knowledge adaptation.
📝 Abstract
LLMs operating in dynamic real-world contexts often encounter knowledge that evolves continuously or emerges incrementally. To remain accurate and effective, models must adapt to newly arriving information on the fly. We introduce Online Adaptation to Continual Knowledge Streams (OAKS) to evaluate this capability, establishing a benchmark for online adaptation over streaming, continually updating knowledge. The benchmark is structured as a sequence of fine-grained context chunks in which facts change dynamically across time intervals. OAKS comprises two datasets, OAKS-BABI and OAKS-Novel, in which individual facts evolve multiple times across context chunks; both include dense annotations to measure whether models track these changes accurately. Evaluating 14 models with varied inference approaches, we observe significant limitations in current methodologies: both state-of-the-art models and agentic memory systems fail to adapt robustly on OAKS, exhibiting delays in state-tracking and susceptibility to distraction in streaming environments.
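The streaming setup described above can be sketched as follows. This is a minimal illustration, not the benchmark's actual data schema: the `(entity, value)` chunk format, the `KnowledgeState` class, and the example facts are all assumptions made for the sketch. It shows the core challenge OAKS targets, namely that a fact ("John's location") is overwritten by later chunks, so a correct online adapter must answer with the most recent value rather than an earlier one.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeState:
    """Tracks the latest value of each fact as context chunks stream in."""
    facts: dict = field(default_factory=dict)

    def ingest(self, chunk):
        # Each chunk may overwrite earlier values of the same fact.
        for entity, value in chunk:
            self.facts[entity] = value

    def query(self, entity):
        # A correct online adapter answers with the most recent value.
        return self.facts.get(entity)

# A fact evolves across chunks; an intervening distractor update must be ignored.
stream = [
    [("John", "kitchen")],   # initial fact
    [("Mary", "garden")],    # distractor: updates a different entity
    [("John", "hallway")],   # John's location changes
]

state = KnowledgeState()
for chunk in stream:
    state.ingest(chunk)

print(state.query("John"))  # an up-to-date tracker returns "hallway"
```

A model that lags in state-tracking behaves as if it had skipped the final `ingest`, answering "kitchen"; one susceptible to distraction confuses the update to Mary with an update to John. OAKS's dense annotations score exactly this kind of per-fact, per-interval correctness.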