🤖 AI Summary
Existing methods and benchmarks for updating the knowledge of large language models (LLMs) predominantly target entity substitution, failing to capture complex, dynamic real-world knowledge evolution. To address this, the authors propose the Knowledge Update Playground (KUP), an automated benchmark that jointly evaluates memorization of updated facts and reasoning over them. They further introduce Memory Conditioned Training (MCT), a lightweight method that conditions tokens in the update corpus on self-generated "memory" tokens during training, encouraging models to surface and reason over newly memorized knowledge at inference, rather than merely reinforcing surface-level memorization as conventional continued pre-training does. KUP's evaluation framework pairs direct probes (memorization) with indirect probes (reasoning) over an evidence corpus, and proves highly challenging: the best continued pre-training (CPT) models achieve <2% accuracy on indirect probing. Empirically, MCT significantly outperforms prior CPT baselines, improving direct probing results by up to 25.4%.
📝 Abstract
Large language models (LLMs) encode vast amounts of pre-trained knowledge in their parameters, but updating them as real-world information evolves remains a challenge. Existing methodologies and benchmarks primarily target entity substitutions, failing to capture the full breadth of complex real-world dynamics. In this paper, we introduce Knowledge Update Playground (KUP), an automatic pipeline for simulating realistic knowledge updates reflected in an evidence corpus. KUP's evaluation framework includes direct and indirect probes to test both memorization of updated facts and reasoning over them, for any update learning method. Next, we present a lightweight method called memory conditioned training (MCT), which conditions tokens in the update corpus on self-generated "memory" tokens during training. Our strategy encourages LLMs to surface and reason over newly memorized knowledge at inference. Our results on two strong LLMs show that (1) the KUP benchmark is highly challenging, with the best CPT models achieving $<2\%$ in the indirect probing setting (reasoning), and (2) MCT training significantly outperforms prior continued pre-training (CPT) baselines, improving direct probing (memorization) results by up to $25.4\%$.
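To make the MCT idea concrete, here is a minimal sketch of how an update-corpus training example might be constructed so that generation is conditioned on self-generated "memory" tokens. This is an illustration based only on the abstract's description; the function names, the data layout, and the choice to mask the loss on memory tokens are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of memory-conditioned training (MCT) data construction.
# Assumption: each training sequence prefixes the update-corpus tokens with
# memory tokens the model generated itself, and the LM loss is applied only
# to the update-corpus tokens, so the model learns to produce new facts
# conditioned on surfaced memories rather than to reproduce the memories.

from dataclasses import dataclass
from typing import List


@dataclass
class MCTExample:
    input_ids: List[int]   # memory tokens followed by update-corpus tokens
    loss_mask: List[int]   # 1 where the LM loss is applied, 0 elsewhere


def build_mct_example(memory_ids: List[int], update_ids: List[int]) -> MCTExample:
    """Concatenate self-generated memory tokens with update-corpus tokens."""
    input_ids = memory_ids + update_ids
    loss_mask = [0] * len(memory_ids) + [1] * len(update_ids)
    return MCTExample(input_ids=input_ids, loss_mask=loss_mask)


# Example: 3 memory tokens condition 4 update-corpus tokens.
ex = build_mct_example(memory_ids=[101, 7, 9], update_ids=[12, 13, 14, 15])
assert ex.input_ids == [101, 7, 9, 12, 13, 14, 15]
assert ex.loss_mask == [0, 0, 0, 1, 1, 1, 1]
```

At inference time, the same conditioning pattern would let the model first surface relevant memories and then reason over the updated facts, which is the behavior the abstract attributes to MCT.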