🤖 AI Summary
Existing recurrent models (e.g., LSTM/GRU) face three fundamental limitations in long-context understanding and extrapolation: fixed memory capacity, online update mechanisms that optimize only for the most recent token, and low expressivity in memory management. This paper introduces ATLAS, a long-term memory module that performs test-time, context-aware, joint optimization of memory over current and past tokens, achieving scalable capacity, non-online updates, and more expressive memory management. Building on this, the paper defines DeepTransformers, a family of architectures that strictly generalize the original Transformer, preserving its capabilities while enhancing long-range dependency modeling. Experiments show that ATLAS consistently outperforms standard Transformers and recent linear recurrent models across diverse tasks, including language modeling, commonsense reasoning, recall-intensive benchmarks, and long-context evaluation. Notably, ATLAS reaches over 80% accuracy on the BABILong benchmark at a 10M-token context length.
📝 Abstract
Transformers have been established as the most popular backbones in sequence modeling, mainly due to their effectiveness in in-context retrieval tasks and their ability to learn at scale. Their quadratic memory and time complexity, however, bounds their applicability to longer sequences, motivating researchers to explore effective alternative architectures such as modern recurrent neural networks (a.k.a. long-term recurrent memory modules). Despite their recent success in diverse downstream tasks, these models struggle in tasks that require long-context understanding and extrapolation to longer sequences. We observe that these shortcomings come from three disjoint aspects of their design: (1) limited memory capacity, bounded by the architecture of the memory and the feature mapping of the input; (2) the online nature of the update, i.e., optimizing the memory only with respect to the last input; and (3) less expressive management of their fixed-size memory. To enhance all three aspects, we present ATLAS, a long-term memory module with high capacity that learns to memorize the context by optimizing the memory based on the current and past tokens, overcoming the online nature of long-term memory models. Building on this insight, we present a new family of Transformer-like architectures, called DeepTransformers, that are strict generalizations of the original Transformer architecture. Our experimental results on language modeling, common-sense reasoning, recall-intensive, and long-context understanding tasks show that ATLAS surpasses the performance of Transformers and recent linear recurrent models. ATLAS further improves the long-context performance of Titans, achieving +80% accuracy at the 10M-token context length of the BABILong benchmark.
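To make the contrast between an online memory update and ATLAS's context-aware optimization concrete, here is a minimal sketch. It assumes a simple linear memory matrix `M` trained by one gradient step on an L2 reconstruction loss between projected keys and values; the function names, window mechanics, and learning rate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def online_update(M, k, v, lr=0.1):
    """Online rule: optimize memory w.r.t. the LAST token only.

    One gradient step on ||M @ k - v||^2 for a single (key, value) pair,
    as in classic recurrent memory updates.
    """
    err = M @ k - v                      # residual for the current token
    return M - lr * np.outer(err, k)     # gradient step on the squared error

def windowed_update(M, keys, values, lr=0.1):
    """Context-aware rule (illustrating ATLAS's idea): optimize memory
    jointly over a sliding window of current AND past tokens.

    The gradient accumulates the reconstruction error of every token in
    the window, so the memory is not fit to the last token alone.
    """
    grad = np.zeros_like(M)
    for k, v in zip(keys, values):
        err = M @ k - v
        grad += np.outer(err, k)         # accumulate per-token gradients
    return M - lr * grad / len(keys)     # averaged joint gradient step
```

A joint step like `windowed_update` reduces the reconstruction loss averaged over the whole window, whereas repeated `online_update` calls can overwrite what earlier tokens stored, which is the capacity and forgetting issue the abstract points to.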