🤖 AI Summary
Memory-augmented neural networks (e.g., DNCs) exhibit poor length generalization on algorithmic tasks—such as sorting—when tested on sequences longer than those seen during training. To address this, we propose a state-space regularization framework that significantly improves the DNC's out-of-distribution sequence-length extrapolation. Our method introduces a dual constraint mechanism: (i) state compression via a low-dimensional linear projection, and (ii) L1/L2 regularization on hidden states. We further identify a strong empirical correlation between recurrent structural patterns (loops) in the learned state space and generalization performance. Additionally, we design a memory-scalable reinitialization strategy that enables post-pretraining capacity expansion of the DNC's memory module. Experiments demonstrate substantial improvements in long-sequence generalization on sorting and related tasks, reduced training cost for long sequences, and an efficient short-sequence pretraining → long-sequence transfer paradigm.
📝 Abstract
Memory-augmented neural networks (MANNs) can solve algorithmic tasks like sorting. However, they often fail to generalize to input-sequence lengths not seen during training. We therefore introduce two approaches that constrain the state space of the network controller to improve generalization to out-of-distribution-sized input sequences: state compression and state regularization. We show that both approaches can improve the generalization capability of a particular type of MANN, the differentiable neural computer (DNC), and compare our approaches against a stateful and a stateless controller on a set of algorithmic tasks. Furthermore, we show that especially the combination of both approaches enables a pre-trained DNC to be extended post hoc with a larger memory. Our approaches thus allow a DNC to be trained on shorter input sequences, saving computational resources. Moreover, we observed that the capability for generalization is often accompanied by loop structures in the state space, which could correspond to looping constructs in algorithms.
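The two constraints on the controller state can be illustrated concretely. Below is a minimal numpy sketch of the dual mechanism described above: a low-rank linear projection that compresses the hidden state, and an L1/L2 penalty on the state that would be added to the task loss during training. All dimensions, weight initializations, and penalty coefficients here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration only.
hidden_dim, compressed_dim = 64, 8

# (i) State compression: route the controller's hidden state through a
# low-dimensional linear bottleneck, limiting its effective rank.
W_down = rng.normal(scale=0.1, size=(hidden_dim, compressed_dim))
W_up = rng.normal(scale=0.1, size=(compressed_dim, hidden_dim))

def compress_state(h):
    """Project h down to compressed_dim and back up (rank <= compressed_dim)."""
    return (h @ W_down) @ W_up

# (ii) State regularization: an L1/L2 penalty on the hidden state,
# added to the task loss at each time step during training.
def state_penalty(h, l1=1e-4, l2=1e-4):
    """Hypothetical combined L1/L2 penalty on the hidden state h."""
    return l1 * np.abs(h).sum() + l2 * np.square(h).sum()

h = rng.normal(size=hidden_dim)
h_compressed = compress_state(h)     # same shape as h, but rank-limited
penalty = state_penalty(h)           # scalar added to the training loss
```

In a real training loop, `compress_state` would sit between the controller and the memory interface, and `penalty` would be summed over time steps and added to the task loss; the paper's actual placement and coefficients may differ.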