🤖 AI Summary
This work addresses the stability-plasticity dilemma in gated linear attention models during in-context learning, which arises from their fixed memory capacity and susceptibility to interference. Framing in-context learning as a continual learning problem, the authors propose the Palimpsa framework, which uses a Bayesian metaplasticity mechanism to modulate the plasticity of each attention state according to its importance, enabling selective retention and forgetting of memories. Palimpsa unifies various gated linear attention architectures, revealing Mamba2 as a special case in which forgetting dominates, and provides a general recipe for converting any non-metaplastic model into a metaplastic one. Experiments show that the approach consistently outperforms baselines on the MQAR benchmark and on commonsense reasoning tasks, effectively expanding memory capacity and improving long-sequence performance.
📝 Abstract
In-Context Learning (ICL) in transformers acts as an online associative memory and is believed to underpin their high performance on complex sequence processing tasks. However, in gated linear attention models, this memory has a fixed capacity and is prone to interference, especially for long sequences. We propose Palimpsa, a self-attention model that views ICL as a continual learning problem that must address a stability-plasticity dilemma. Palimpsa uses Bayesian metaplasticity, where the plasticity of each attention state is tied to an importance state grounded by a prior distribution that captures accumulated knowledge. We demonstrate that various gated linear attention models emerge as specific architecture choices and posterior approximations, and that Mamba2 is a special case of Palimpsa where forgetting dominates. This theoretical link enables the transformation of any non-metaplastic model into a metaplastic one, significantly expanding its memory capacity. Our experiments show that Palimpsa consistently outperforms baselines on the Multi-Query Associative Recall (MQAR) benchmark and on Commonsense Reasoning tasks.
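The abstract's core idea — a gated associative memory whose per-slot forgetting rate is modulated by an accumulated importance state — can be illustrated with a toy NumPy sketch. Note that the paper's actual update rule is not given in this excerpt; the names `imp`, `gate`, and `prior_decay`, and the specific importance-accumulation and gating formulas below, are hypothetical stand-ins, chosen only so that a constant `gate = prior_decay` recovers a Mamba2-style decay-dominated update:

```python
import numpy as np

rng = np.random.default_rng(0)
d_k, d_v, T = 4, 4, 16

S = np.zeros((d_v, d_k))   # associative memory state (outer-product memory)
imp = np.zeros(d_k)        # importance state (hypothetical accumulation rule)
prior_decay = 0.9          # baseline forgetting; gate == prior_decay would be
                           # a Mamba2-like, forgetting-dominated special case

for t in range(T):
    k = rng.standard_normal(d_k)
    k /= np.linalg.norm(k)                 # unit-norm key
    v = rng.standard_normal(d_v)
    # Importance grows along key directions that are written often
    # (illustrative choice, not the paper's estimator).
    imp = prior_decay * imp + k**2
    # Metaplastic gate: important slots decay less (stability),
    # unimportant slots decay more (plasticity).
    gate = prior_decay + (1 - prior_decay) * imp / (1 + imp)
    S = S * gate[None, :] + np.outer(v, k)  # gated outer-product write

q = k            # query with the most recently written key
out = S @ q      # associative recall of the stored value
```

The sketch only shows the structural point made in the abstract: the forgetting gate is no longer a fixed architectural constant but a function of a running importance estimate, so frequently used associations are protected from interference while stale ones are overwritten.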