🤖 AI Summary
Linear RNNs underperform in long-context language modeling because their approximate online update rules cannot fully exploit extended dependencies. Method: This paper introduces MesaNet, an RNN architecture built around a numerically stable, chunkwise-parallelizable version of the recently proposed Mesa layer (von Oswald et al., 2024). At every time step, the layer minimizes an in-context regression objective to optimality using a fast conjugate gradient solver, performing optimal test-time training while retaining constant-memory, linear-time inference. Results: At the billion-parameter scale, MesaNet reaches lower language modeling perplexity than state-of-the-art linear RNNs (e.g., Mamba, xLSTM) and achieves higher downstream benchmark performance, especially on long-context tasks such as PG19 and BookSum, demonstrating that optimal test-time optimization meaningfully enhances model capability at the cost of additional inference-time compute.
📝 Abstract
Sequence modeling is currently dominated by causal transformer architectures that use softmax self-attention. Although widely adopted, transformers require memory and compute that scale linearly with sequence length during inference. A recent stream of work linearized the softmax operation, resulting in powerful recurrent neural network (RNN) models with constant memory and compute costs, such as DeltaNet, Mamba, or xLSTM. These models can be unified by noting that their recurrent layer dynamics can all be derived from an in-context regression objective, approximately optimized through an online learning rule. Here, we join this line of work and introduce a numerically stable, chunkwise-parallelizable version of the recently proposed Mesa layer (von Oswald et al., 2024), and study it in language modeling at the billion-parameter scale. This layer again stems from an in-context loss, which is now minimized to optimality at every time point using a fast conjugate gradient solver. Through an extensive suite of experiments, we show that optimal test-time training reaches lower language modeling perplexity and higher downstream benchmark performance than previous RNNs, especially on tasks requiring long-context understanding. This performance gain comes at the cost of additional FLOPs spent at inference time. Our results are therefore intriguingly related to recent trends of increasing test-time compute to improve performance, here by spending compute to solve sequential optimization problems within the neural network itself.
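To make the core idea concrete, here is a minimal sketch of the kind of computation the abstract describes: solving a regularized in-context least-squares problem to optimality with conjugate gradients, then reading out with the resulting optimal linear map. This is an illustrative toy, not the paper's implementation; the function names, the ridge parameter `lam`, and the plain-numpy formulation are assumptions, and the real layer operates per head, per time step, with chunkwise parallelization.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=None):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def mesa_readout(K, V, q, lam=1.0):
    """Read out with the optimal in-context linear map.

    K: (d_k, t) keys seen so far, V: (d_v, t) values, q: (d_k,) query.
    Returns W* q where W* = argmin_W sum_s ||W k_s - v_s||^2 + lam ||W||_F^2,
    i.e. W* = V K^T (K K^T + lam I)^{-1}. Instead of forming the inverse,
    we solve (K K^T + lam I) h = q by conjugate gradients.
    """
    A = K @ K.T + lam * np.eye(K.shape[0])  # SPD Gram matrix
    h = conjugate_gradient(A, q)
    return V @ (K.T @ h)
```

A recurrent implementation would maintain the running statistics `K K^T` and `V K^T` in constant memory and update them at each step, which is what keeps inference linear-time despite solving the objective to optimality.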