Tell Me What To Learn: Generalizing Neural Memory to be Controllable in Natural Language

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a general-purpose neural memory system controlled by natural language instructions to address the limited user controllability of existing neural memory models, which struggle to adapt to heterogeneous information streams and dynamic task demands. By integrating a learnable memory architecture with an instruction parsing module, the system enables flexible, natural language–guided control over memory writing and retention—overcoming the constraints of conventional fixed-objective mechanisms. It supports lightweight, on-demand selective learning, allowing users to dictate what information is stored or preserved. Experimental results demonstrate that the proposed approach significantly mitigates catastrophic forgetting across diverse domains and substantially enhances the model’s capacity for selective learning from heterogeneous data streams.
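To make the summary concrete, below is a minimal, illustrative sketch (in PyTorch) of what an instruction-gated memory write/read path could look like. This is an assumption-heavy reading of the description above, not the paper's architecture: the class name, the slot-memory layout, the gating MLP, and the idea of a precomputed instruction embedding are all hypothetical, and the instruction/item embeddings are assumed to come from some external text encoder.

```python
import torch
import torch.nn as nn


class InstructionControlledMemory(nn.Module):
    """Hypothetical sketch of an instruction-gated slot memory (not the paper's
    implementation): a natural-language instruction is encoded once and then
    gates which incoming items get written into memory."""

    def __init__(self, num_slots: int = 64, dim: int = 256):
        super().__init__()
        # Non-learnable slot storage; writes happen in-place at inference time.
        self.memory = nn.Parameter(torch.zeros(num_slots, dim), requires_grad=False)
        # Scores how relevant an item is under the current instruction.
        self.write_gate = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )
        # Picks which slot an accepted item should overwrite.
        self.slot_scorer = nn.Linear(dim, num_slots)
        self.instruction = None

    def set_instruction(self, instruction_embedding: torch.Tensor) -> None:
        # (dim,) vector from any sentence encoder of the user's instruction,
        # e.g. "remember dosage changes, ignore small talk" (encoder assumed).
        self.instruction = instruction_embedding

    def write(self, item: torch.Tensor, threshold: float = 0.5) -> bool:
        # Gate the write: only store items the instruction deems relevant.
        assert self.instruction is not None, "call set_instruction() first"
        gate = torch.sigmoid(self.write_gate(torch.cat([item, self.instruction])))
        if gate.item() < threshold:
            return False  # instruction says: ignore this item
        slot = self.slot_scorer(item).argmax()
        self.memory.data[slot] = item  # overwrite the selected slot
        return True

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # Content-based read: attention over memory slots.
        attn = torch.softmax(self.memory @ query, dim=0)
        return attn @ self.memory
```

The only point of the sketch is the control flow: the instruction is encoded once and conditions every subsequent write decision, so a user can change what the memory keeps or discards by changing a sentence rather than by retraining or re-prompting the base model.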

📝 Abstract
Modern machine learning models are deployed in diverse, non-stationary environments where they must continually adapt to new tasks and evolving knowledge. Continual fine-tuning and in-context learning are costly and brittle, whereas neural memory methods promise lightweight updates with minimal forgetting. However, existing neural memory models typically assume a single fixed objective and homogeneous information streams, leaving users with no control over what the model remembers or ignores over time. To address this challenge, we propose a generalized neural memory system that performs flexible updates based on learning instructions specified in natural language. Our approach enables adaptive agents to learn selectively from heterogeneous information sources, supporting settings, such as healthcare and customer service, where fixed-objective memory updates are insufficient.
Problem

Research questions and friction points this paper is trying to address.

neural memory
controllable learning
heterogeneous information
natural language instruction
continual adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

neural memory
natural language instructions
controllable learning
continual adaptation
heterogeneous information
🔎 Similar Papers
No similar papers found.
Authors
Max S. Bennett
Columbia University
Thomas P. Zollo
Columbia University
Richard Zemel
Professor of Computer Science, University of Toronto
Machine Learning · Computer Vision · Neural Coding