HMT: Hierarchical Memory Transformer for Efficient Long Context Language Processing

📅 2024-05-09
📈 Citations: 2
✨ Influential: 0
🤖 AI Summary
To address long-context processing in large language models (LLMs) under device memory constraints, this paper proposes the Hierarchical Memory Transformer (HMT), a framework that imitates the hierarchical organization of human memory. Building on memory-augmented segment-level recurrence, HMT preserves tokens from early input segments, propagates memory embeddings along the sequence, and recalls relevant information from history, overcoming the information-filtering limitations of conventional "flat" memory designs. Evaluations on general language modeling, question answering, and summarization show that HMT consistently improves the long-context processing ability of existing models, matching or exceeding the generation quality of long-context LLMs while using 2–57× fewer parameters and 2.5–116× less inference memory, and significantly outperforming previous memory-augmented models.

๐Ÿ“ Abstract
Transformer-based large language models (LLM) have been widely used in language processing applications. However, due to the memory constraints of the devices, most of them restrict the context window. Even though recurrent models in previous works can memorize past tokens to enable unlimited context and maintain effectiveness, they have "flat" memory architectures. Such architectures have limitations in selecting and filtering information. Since humans are good at learning and self-adjustment, we believe that imitating brain memory hierarchy is beneficial for model memorization. Thus, we propose the Hierarchical Memory Transformer (HMT), a novel framework that facilitates a model's long-context processing ability by imitating human memorization behavior. Leveraging memory-augmented segment-level recurrence, we organize the memory hierarchy by preserving tokens from early input segments, passing memory embeddings along the sequence, and recalling relevant information from history. Evaluating general language modeling, question-answering tasks, and the summarization task, we show that HMT consistently improves the long-context processing ability of existing models. Furthermore, HMT achieves a comparable or superior generation quality to long-context LLMs with $2\sim 57\times$ fewer parameters and $2.5\sim 116\times$ less inference memory, significantly outperforming previous memory-augmented models. Code on Github: https://github.com/OswaldHe/HMT-pytorch.
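The mechanism the abstract describes, processing the input segment by segment, carrying a memory embedding forward, and recalling relevant past memories by similarity, can be sketched as follows. This is a minimal toy illustration in NumPy, not the paper's actual architecture: the embedding table, mixing weights, and segment summarization are random or fixed stand-ins for learned model components, and the function name `hmt_sketch` is invented for this example.

```python
import numpy as np

def hmt_sketch(tokens, segment_len=4, d=8, seed=0):
    """Toy sketch of HMT-style segment-level memory recurrence.

    For each segment we (1) summarize it into a memory embedding,
    (2) recall relevant past memories via dot-product attention over
    the cached history, and (3) blend the carried memory, the current
    summary, and the recalled memory before moving to the next segment.
    """
    rng = np.random.default_rng(seed)
    embed = rng.standard_normal((64, d))   # stand-in token embedding table
    memories = []                          # cached per-segment memory embeddings
    carry = np.zeros(d)                    # memory passed along the sequence

    for start in range(0, len(tokens), segment_len):
        seg = tokens[start:start + segment_len]
        summary = embed[seg].mean(axis=0)  # summarize the current segment

        if memories:                       # recall from historical memories
            hist = np.stack(memories)
            scores = hist @ summary        # similarity to current summary
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()       # softmax attention weights
            recalled = weights @ hist
        else:
            recalled = np.zeros(d)

        # fixed mixing weights here; in HMT these updates are learned
        carry = 0.5 * carry + 0.3 * summary + 0.2 * recalled
        memories.append(summary)

    return carry, len(memories)

final_memory, n_segments = hmt_sketch(list(range(10)), segment_len=4)
print(n_segments)  # 10 tokens with segment_len=4 -> 3 segments
```

Because only a fixed-size memory embedding crosses segment boundaries, the per-step working set stays constant regardless of total sequence length, which is the intuition behind the reported inference-memory savings.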
Problem

Research questions and friction points this paper is trying to address.

Enhances long-context processing in language models
Imitates human memory hierarchy for better information filtering
Reduces model parameters and inference memory usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Memory Transformer framework
Memory-augmented segment-level recurrence
Imitates human memory hierarchy