🤖 AI Summary
This work addresses a limitation of conventional Transformers: they lack a native knowledge-retrieval mechanism and must inefficiently simulate memory access through computation, which constrains their performance on knowledge-intensive tasks. The authors propose “conditional memory” as a novel dimension of sparsity, instantiated in an Engram module that modernizes classic N-gram embeddings to enable O(1) static knowledge lookup. This module is co-optimized with a Mixture-of-Experts (MoE) architecture to balance neural computation and memory invocation. Integrating scalable static memory into sparse large language models for the first time, the authors uncover a U-shaped scaling law relating memory capacity to model performance. The memory module alleviates the burden on the backbone network, allowing it to focus on complex reasoning. The method significantly improves performance on knowledge benchmarks such as MMLU and CMMLU, yields even greater gains on reasoning and code tasks including BBH, ARC, HumanEval, and MATH, and boosts long-context retrieval accuracy from 84.2% to 97.0%.
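To make the lookup mechanism concrete, below is a minimal sketch of a hashed N-gram embedding of the kind the summary describes. The class name `EngramSketch`, the multiplicative rolling hash, and the table size are illustrative assumptions; the paper's exact Engram design is not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EngramSketch(nn.Module):
    """Illustrative O(1) static-memory lookup via hashed N-gram embeddings.

    Each length-N window of token IDs is hashed into a fixed embedding
    table, so retrieval is a single deterministic table read per position
    (no attention or FFN compute involved). This is a hypothetical
    simplification, not the paper's exact module.
    """

    def __init__(self, table_size: int, d_model: int, n: int = 2):
        super().__init__()
        self.n = n
        self.table_size = table_size
        self.table = nn.Embedding(table_size, d_model)
        # Random odd multipliers form a cheap multiplicative hash per window.
        self.register_buffer("mults", torch.randint(1, 2**31 - 1, (n,)) * 2 + 1)

    def ngram_ids(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq) int64. Left-pad so every position has a window.
        padded = F.pad(tokens, (self.n - 1, 0))
        windows = padded.unfold(1, self.n, 1)          # (batch, seq, n)
        return (windows * self.mults).sum(-1) % self.table_size

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Addressing depends only on the raw input tokens, so which rows
        # will be read is known before the forward pass even begins.
        return self.table(self.ngram_ids(tokens))      # (batch, seq, d_model)
```

In the full model the retrieved vectors would presumably be combined with the backbone's hidden states in some learned way; the sketch only shows the addressing path, which costs one hash and one table read per position regardless of context length.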
📝 Abstract
While Mixture-of-Experts (MoE) scales capacity via conditional computation, Transformers lack a native primitive for knowledge lookup, forcing them to inefficiently simulate retrieval through computation. To address this, we introduce conditional memory as a complementary sparsity axis, instantiated via Engram, a module that modernizes classic $N$-gram embedding for O(1) lookup. By formulating the Sparsity Allocation problem, we uncover a U-shaped scaling law that optimizes the trade-off between neural computation (MoE) and static memory (Engram). Guided by this law, we scale Engram to 27B parameters, achieving superior performance over a strictly iso-parameter and iso-FLOPs MoE baseline. Most notably, while the memory module is expected to aid knowledge retrieval (e.g., MMLU +3.4; CMMLU +4.0), we observe even larger gains in general reasoning (e.g., BBH +5.0; ARC-Challenge +3.7) and code/math domains (HumanEval +3.0; MATH +2.4). Mechanistic analyses reveal that Engram relieves the backbone's early layers from static reconstruction, effectively deepening the network for complex reasoning. Furthermore, by delegating local dependencies to lookups, it frees up attention capacity for global context, substantially boosting long-context retrieval (e.g., Multi-Query NIAH: 84.2 to 97.0). Finally, Engram establishes infrastructure-aware efficiency: its deterministic addressing enables runtime prefetching from host memory, incurring negligible overhead. We envision conditional memory as an indispensable modeling primitive for next-generation sparse models.
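The closing claim about infrastructure-aware efficiency can likewise be sketched. Because the N-gram addresses are a pure function of the input tokens, the needed embedding rows can be gathered from a host-memory table and copied to the device on a side CUDA stream while earlier layers compute. The function below is a hypothetical illustration using standard PyTorch primitives (`pin_memory`, `non_blocking` copies, `torch.cuda.Stream`), not the paper's actual runtime.

```python
import torch

def prefetch_engram_rows(table_cpu: torch.Tensor,
                         ngram_ids: torch.Tensor,
                         device: torch.device,
                         stream: "torch.cuda.Stream"):
    # ngram_ids: (batch, seq) on CPU, computed from the tokens alone --
    # this determinism is what makes ahead-of-time prefetching possible.
    unique_ids, inverse = ngram_ids.unique(return_inverse=True)
    rows = table_cpu[unique_ids].pin_memory()   # stage rows in pinned host RAM
    with torch.cuda.stream(stream):             # overlap the copy with compute
        rows_gpu = rows.to(device, non_blocking=True)
    return rows_gpu, inverse.to(device)

# Usage sketch: launch the prefetch as soon as the batch's token IDs are
# known, then make the main stream wait before the Engram layer consumes
# the rows:
#
#   side = torch.cuda.Stream()
#   rows_gpu, inverse = prefetch_engram_rows(table, ids, device, side)
#   torch.cuda.current_stream().wait_stream(side)
#   engram_out = rows_gpu[inverse]              # (batch, seq, d_model)
```

Under these assumptions the memory table never needs to reside in device memory, which is consistent with the abstract's claim that deterministic addressing makes host-memory prefetching essentially free.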