🤖 AI Summary
Traditional large language models (LLMs) encode factual knowledge implicitly across their vast parameter space, which makes individual facts difficult to verify, edit, or update. To address this, we propose the Large Memory Language Model (LMLM), the first framework to jointly pretrain parametric representations with an external structured knowledge base. LMLM introduces a knowledge-aware masking strategy that compels the model to actively retrieve external facts during training, rather than relying solely on implicit parametric memory, and couples a unified encoding of internal and external knowledge with a plug-and-play database interface. This design enables explicit fact editing, real-time verification, and targeted knowledge updates. Experiments demonstrate that LMLM matches or exceeds the performance of significantly larger LLMs on standard knowledge-intensive benchmarks, while substantially improving factual accuracy, reasoning interpretability, and the efficiency of knowledge maintenance.
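To make the "plug-and-play database interface" concrete, here is a minimal sketch of what an editable external fact store could look like: a table of (entity, relation) → value triples that the model queries during decoding and that can be updated without touching model weights. The `FactDB` class, its methods, and the lookup-call format are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of a plug-and-play external fact store (names are hypothetical).
from dataclasses import dataclass

@dataclass
class FactDB:
    """Editable store of (entity, relation) -> value triples."""
    triples: dict

    def lookup(self, entity: str, relation: str) -> str:
        # Return the stored value, or a sentinel if the fact is unknown.
        return self.triples.get((entity, relation), "[UNKNOWN]")

    def update(self, entity: str, relation: str, value: str) -> None:
        # Editing a fact is a single write; no retraining of the model is needed.
        self.triples[(entity, relation)] = value

db = FactDB({("Marie Curie", "birth year"): "1867"})

# During decoding, the model would emit a structured lookup call instead of the
# fact itself; the returned value is spliced back into the running text.
call = ("Marie Curie", "birth year")  # parsed from the model's lookup request
generated = f"Marie Curie was born in {db.lookup(*call)}."
print(generated)

# A targeted knowledge edit: one write to the database, immediately visible at inference.
db.update("Marie Curie", "birth year", "1867")
```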
📝 Abstract
Neural language models are black boxes -- both linguistic patterns and factual knowledge are distributed across billions of opaque parameters. This entangled encoding makes it difficult to reliably inspect, verify, or update specific facts. We propose a new class of language models, Large Memory Language Models (LMLMs), with a pre-training recipe that stores factual knowledge in both internal weights and an external database. Our approach strategically masks externally retrieved factual values from the training loss, thereby teaching the model to perform targeted lookups rather than relying on memorization in model weights. Our experiments demonstrate that LMLMs achieve competitive performance compared to significantly larger, knowledge-dense LLMs on standard benchmarks, while offering the advantages of explicit, editable, and verifiable knowledge bases. This work represents a fundamental shift in how language models interact with and manage factual knowledge.
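The loss-masking idea can be sketched in a few lines of PyTorch: tokens whose values were filled in from the external database are excluded from the next-token prediction loss, so the model is rewarded for issuing lookups rather than for memorizing the retrieved values. The function name, mask convention, and tensor shapes below are assumptions for illustration, not the paper's implementation.

```python
# Sketch of masking retrieved factual values out of the training loss.
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # PyTorch's default ignore_index for cross_entropy

def masked_lm_loss(logits, target_ids, fact_mask):
    """Next-token loss that skips externally retrieved fact tokens.

    logits:     (batch, seq_len, vocab_size) model outputs
    target_ids: (batch, seq_len) ground-truth token ids
    fact_mask:  (batch, seq_len) bool, True where the token came from the database
    """
    labels = target_ids.clone()
    labels[fact_mask] = IGNORE_INDEX  # retrieved fact values do not contribute to the loss
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        ignore_index=IGNORE_INDEX,
    )

# Toy usage: batch of 1, sequence of 5 tokens, vocabulary of 10.
logits = torch.randn(1, 5, 10)
targets = torch.randint(0, 10, (1, 5))
fact_mask = torch.tensor([[False, False, True, True, False]])  # tokens 2-3 were retrieved
print(masked_lm_loss(logits, targets, fact_mask))
```

Under this scheme the gradient only flows through the tokens the model is expected to produce itself (including the lookup call), which is what discourages parametric memorization of the fact values.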