🤖 AI Summary
This work proposes a general-purpose neural memory system controlled by natural language instructions, addressing the limited user controllability of existing neural memory models, which struggle to adapt to heterogeneous information streams and dynamic task demands. By pairing a learnable memory architecture with an instruction parsing module, the system gives users flexible, natural language–guided control over memory writing and retention, overcoming the constraints of conventional fixed-objective mechanisms. It supports lightweight, on-demand selective learning: users can dictate what information is stored or preserved. Experiments show that the proposed approach significantly mitigates catastrophic forgetting across diverse domains and substantially improves selective learning from heterogeneous data streams.
📝 Abstract
Modern machine learning models are deployed in diverse, non-stationary environments where they must continually adapt to new tasks and evolving knowledge. Continual fine-tuning and in-context learning are costly and brittle, whereas neural memory methods promise lightweight updates with minimal forgetting. However, existing neural memory models typically assume a single fixed objective and homogeneous information streams, leaving users with no control over what the model remembers or ignores over time. To address this challenge, we propose a generalized neural memory system that performs flexible updates based on learning instructions specified in natural language. Our approach enables adaptive agents to learn selectively from heterogeneous information sources, supporting settings such as healthcare and customer service where fixed-objective memory updates are insufficient.
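To make the idea of instruction-guided selective memory concrete, here is a minimal toy sketch, not the paper's actual architecture: the class name, the keyword-based "parser", and the plain key-value store are all illustrative stand-ins for the learned instruction parsing module and neural memory the abstract describes.

```python
# Hypothetical illustration of instruction-gated memory writes.
# A real system would use a learned instruction parser and a neural
# key-value memory; a keyword filter stands in for both here.

class InstructionGatedMemory:
    def __init__(self):
        self.store = []           # retained (topic, fact) entries
        self.keep_topics = set()  # topics the user asked to remember

    def parse_instruction(self, instruction: str) -> None:
        # Toy "parser": treat the word after 'remember' as a topic keyword.
        words = instruction.lower().split()
        if "remember" in words:
            self.keep_topics.add(words[words.index("remember") + 1])

    def write(self, topic: str, fact: str) -> None:
        # Selective write: only retain facts matching an active instruction,
        # so the user controls what is stored rather than a fixed objective.
        if topic in self.keep_topics:
            self.store.append((topic, fact))

    def retrieve(self, topic: str) -> list[str]:
        return [fact for t, fact in self.store if t == topic]


mem = InstructionGatedMemory()
mem.parse_instruction("Please remember medication schedules")
mem.write("medication", "Patient takes 5mg daily")   # kept: matches instruction
mem.write("smalltalk", "User mentioned the weather")  # dropped: no instruction
print(mem.retrieve("medication"))  # ['Patient takes 5mg daily']
print(mem.retrieve("smalltalk"))   # []
```

The point of the sketch is the gating in `write`: the same incoming stream yields different memory contents depending on the natural-language instruction, which is the user controllability that fixed-objective memory mechanisms lack.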