🤖 AI Summary
This work addresses the poor interpretability and analytical intractability of feed-forward networks (FFNs) in large language models, which stem from their tight coupling with self-attention mechanisms. The authors propose decoupling FFNs into context-independent, token-wise neural memory modules implemented via precomputed token-wise lookup tables (ToLs), enabling efficient inference. They introduce a plug-and-play, interpretable FFN memory architecture that supports context-free training and on-demand loading. Furthermore, a hybrid Flex-MemoryLLM framework is designed to balance model performance against the degree of decoupling. This approach not only preserves competitive model performance but also significantly improves inference efficiency, while revealing the critical role of FFNs as memory components across diverse tasks.
📝 Abstract
Understanding how transformer components operate in LLMs is important, as it is at the core of recent technological advances in artificial intelligence. In this work, we revisit the challenges associated with the interpretability of feed-forward networks (FFNs) and propose MemoryLLM, which decouples FFNs from self-attention, enabling us to study the decoupled FFNs as context-free, token-wise neural retrieval memory. In detail, we investigate how input tokens access memory locations within FFN parameters and how important FFN memory is across different downstream tasks. MemoryLLM achieves context-free FFNs by training them in isolation from self-attention, directly on the token embeddings. This allows FFN outputs to be pre-computed as token-wise lookups (ToLs), enabling on-demand transfer between VRAM and storage and additionally enhancing inference efficiency. We also introduce Flex-MemoryLLM, a hybrid design positioned between a conventional transformer and MemoryLLM, which bridges the performance gap caused by training FFNs on context-free token-wise embeddings.
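The core idea, that a context-free FFN's output depends only on the input token's embedding and can therefore be tabulated once per vocabulary entry, can be illustrated with a minimal NumPy sketch. All names, shapes, and the GELU-based two-layer FFN form here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def ffn(x, W1, b1, W2, b2):
    # standard two-layer transformer FFN (hypothetical stand-in)
    return gelu(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
vocab, d, d_ff = 100, 16, 64            # toy sizes, for illustration only
E = rng.normal(size=(vocab, d))         # token embedding table
W1, b1 = rng.normal(size=(d, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d)), np.zeros(d)

# Because the FFN sees only the (context-free) token embedding, its output
# can be precomputed once per vocabulary token into a lookup table (ToL).
tol = ffn(E, W1, b1, W2, b2)            # shape: (vocab, d)

# At inference, the FFN forward pass reduces to indexing by token id;
# the table can live in storage and be paged into VRAM on demand.
token_ids = np.array([3, 17, 42])
out = tol[token_ids]
assert np.allclose(out, ffn(E[token_ids], W1, b1, W2, b2))
```

The lookup returns exactly what the FFN would compute, so the matrix multiplications are paid once at table-build time rather than on every forward pass.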