State-Space Constraints Improve the Generalization of the Differentiable Neural Computer in some Algorithmic Tasks

📅 2021-10-18
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Memory-augmented neural networks (e.g., DNCs) generalize poorly on algorithmic tasks such as sorting when tested on sequences longer than those seen during training. To address this, we propose a state-space regularization framework that significantly improves the DNC's out-of-distribution sequence-length extrapolation. Our method introduces a dual constraint mechanism: (i) state compression via a low-dimensional linear projection, and (ii) L1/L2 regularization of the hidden states. We further identify a strong empirical correlation between recurrent loop-like structures in the learned state space and generalization performance. Additionally, we design a memory-scalable reinitialization strategy that enables post-pretraining capacity expansion of the DNC's memory module. Experiments demonstrate substantial improvements in long-sequence generalization on sorting and related tasks, reduced training cost for long sequences, and an efficient short-sequence pretraining to long-sequence transfer paradigm.
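The two constraints described above can be illustrated with a minimal NumPy sketch. Note this is an assumption-laden toy, not the paper's implementation: the matrix names (`W_down`, `W_up`), dimensions, and penalty weights are all hypothetical, and the projection is shown as a plain low-rank linear map applied to a controller state vector.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim, compressed_dim = 64, 8  # illustrative sizes, not from the paper

# (i) State compression: force the controller's recurrent state through a
# low-dimensional linear bottleneck before it is carried to the next step.
W_down = rng.normal(0.0, 0.1, (compressed_dim, hidden_dim))
W_up = rng.normal(0.0, 0.1, (hidden_dim, compressed_dim))

def compress_state(h):
    """Project the hidden state down to compressed_dim and back up."""
    return W_up @ (W_down @ h)

# (ii) State regularization: an L1/L2 penalty on the hidden state,
# added to the task loss during training (weights are placeholders).
def state_penalty(h, l1=1e-4, l2=1e-4):
    return l1 * np.abs(h).sum() + l2 * np.square(h).sum()

h = rng.normal(size=hidden_dim)   # stand-in for a controller hidden state
h_c = compress_state(h)           # compressed state, still hidden_dim long
loss_extra = state_penalty(h_c)   # regularization term for the total loss
```

The key property of the bottleneck is that the composite map `W_up @ W_down` has rank at most `compressed_dim`, so the effective state space the controller can occupy is restricted even though the state vector keeps its original length.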
📝 Abstract
Memory-augmented neural networks (MANNs) can solve algorithmic tasks like sorting. However, they often do not generalize to lengths of input sequences not seen in the training phase. Therefore, we introduce two approaches constraining the state-space of the network controller to improve the generalization to out-of-distribution-sized input sequences: state compression and state regularization. We show that both approaches can improve the generalization capability of a particular type of MANN, the differentiable neural computer (DNC), and compare our approaches to a stateful and a stateless controller on a set of algorithmic tasks. Furthermore, we show that especially the combination of both approaches can enable a pre-trained DNC to be extended post hoc with a larger memory. Thus, our introduced approaches allow a DNC to be trained using shorter input sequences and thus save computational resources. Moreover, we observed that the capability for generalization is often accompanied by loop structures in the state-space, which could correspond to looping constructs in algorithms.
Problem

Research questions and friction points this paper is trying to address.

Improving neural network generalization to unseen input sequence lengths
Applying state-space constraints to enhance memory-augmented network performance
Enabling processing of longer sequences without retraining through memory extension
Innovation

Methods, ideas, or system contributions that make the work stand out.

State compression and regularization constrain controller state space
Constrained DNC processes sequences 2.3 times longer than baseline
Memory matrix extension enables 10.4 times longer sequence processing
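The memory-extension contribution above can be sketched as appending fresh, unused slots to a trained DNC's memory. This is a hedged toy, not the paper's procedure: the DNC memory is modeled only as an N x W matrix plus a usage vector, the shapes are invented, and new slots are zero-initialized purely for illustration.

```python
import numpy as np

# Hypothetical trained-memory shapes: N slots of word size W.
n_slots, word_size = 32, 16
memory = np.random.default_rng(1).normal(size=(n_slots, word_size))
usage = np.zeros(n_slots)  # simplified stand-in for the DNC usage vector

def extend_memory(memory, usage, extra_slots):
    """Append zero-initialized, unused slots to an existing memory matrix.

    The trained rows are kept verbatim; only fresh rows (and matching
    usage entries) are added, so a pre-trained controller can address a
    larger memory without retraining from scratch.
    """
    new_rows = np.zeros((extra_slots, memory.shape[1]))
    extended_memory = np.vstack([memory, new_rows])
    extended_usage = np.concatenate([usage, np.zeros(extra_slots)])
    return extended_memory, extended_usage

memory2, usage2 = extend_memory(memory, usage, extra_slots=96)
```

In a full DNC, the temporal link matrix and precedence weighting would need the same row/column extension; that bookkeeping is omitted here for brevity.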
P. Ofner
Department of Microsystems Engineering, University of Freiburg, Freiburg, Germany
Roman Kern
Know Center, Graz University of Technology
Natural Language Processing · Machine Learning · Causal Data Science · Trustworthy AI