Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models

๐Ÿ“… 2023-08-29
๐Ÿ›๏ธ Neurocomputing
๐Ÿ“ˆ Citations: 21
โœจ Influential: 2
๐Ÿค– AI Summary
Large language models (LLMs) suffer from memory decay and response inconsistency in extended dialogues. To address this, we propose a recursive hierarchical memory mechanism that leverages LLM-driven, multi-granularity dialogue summarization to compress conversation history into compact, evolvable memory chainsโ€”without architectural modifications or fine-tuning. Our approach is model-agnostic, compatible with both open- and closed-source LLMs, supports long-context windows (e.g., 8K/16K tokens), and integrates seamlessly with retrieval-augmented generation frameworks. Evaluated on public long-dialogue benchmarks, it significantly improves response consistency. Notably, this is the first method to enable purely prompt-driven, structure-agnostic, and scalable long-term dialogue memory modeling. By eliminating reliance on parameter updates or predefined memory schemas, our framework establishes a novel paradigm for ultra-long dialogue systems, offering robust, interpretable, and computationally efficient memory augmentation grounded entirely in prompting and summarization.
๐Ÿ“ Abstract
Recently, large language models (LLMs), such as GPT-4, have demonstrated remarkable conversational abilities, enabling them to engage in dynamic and contextually relevant dialogues across a wide range of topics. However, given a long conversation, these chatbots fail to recall past information and tend to generate inconsistent responses. To address this, we propose to recursively generate summaries/memory using LLMs to enhance their long-term memory ability. Specifically, our method first stimulates LLMs to memorize small dialogue contexts and then recursively produces new memory using the previous memory and the following contexts. Finally, the chatbot can easily generate a highly consistent response with the help of the latest memory. We evaluate our method on both open- and closed-source LLMs, and experiments on a widely used public dataset show that our method generates more consistent responses in long-context conversations. We also show that our strategy nicely complements both long-context (e.g., 8K and 16K) and retrieval-enhanced LLMs, bringing further gains in long-term dialogue performance. Notably, our method is a potential solution for enabling LLMs to model extremely long contexts. The code and scripts will be released later.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs' long-term memory for dialogues
Generate consistent responses in long conversations
Complement long-context and retrieval-enhanced LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recursively generate summaries for long-term memory
Combine previous memory with new dialogue contexts
Enhance consistency in long-context LLM responses
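
The recursive scheme described above can be sketched as a simple fold over dialogue sessions: summarize the first chunk, then repeatedly merge the previous memory with the next chunk. This is an illustrative outline, not the paper's implementation; the prompt wording and the `recursive_memory`/`respond` names are assumptions, and any chat-completion function can be plugged in as `llm`.

```python
# Minimal sketch of recursive summary-based memory (illustrative, not the
# paper's exact prompts). `llm` is any text-in/text-out completion callable,
# so the loop stays model-agnostic, matching the open/closed-LLM setting.
from typing import Callable, Iterable, List


def recursive_memory(
    llm: Callable[[str], str],
    sessions: Iterable[List[str]],
) -> str:
    """Fold dialogue sessions into one evolving memory string."""
    memory = ""
    for turns in sessions:
        context = "\n".join(turns)
        if not memory:
            # First chunk: summarize the raw dialogue context.
            prompt = f"Summarize this dialogue:\n{context}"
        else:
            # Later chunks: merge the previous memory with the new context.
            prompt = (
                f"Previous memory:\n{memory}\n\n"
                f"New dialogue:\n{context}\n\n"
                "Update the memory to cover both."
            )
        memory = llm(prompt)
    return memory


def respond(llm: Callable[[str], str], memory: str, query: str) -> str:
    """Answer the latest user turn conditioned on the latest memory."""
    return llm(f"Memory:\n{memory}\n\nUser: {query}\nAssistant:")
```

Because only the latest memory (plus the current chunk) is ever placed in the prompt, context cost stays roughly constant as the dialogue grows, which is what lets the approach scale to extremely long conversations and combine with long-context or retrieval-enhanced models.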
Qingyue Wang
Hong Kong University of Science and Technology
Large Language Model, AI Security, Text Generation
Liang Ding
The University of Sydney, Australia
Yanan Cao
Institute of Information Engineering, Chinese Academy of Sciences
Zhiliang Tian
National University of Defense Technology, China
Shi Wang
Institute of Computing Technology
knowledge graph, natural language processing, neural-symbolic dual-process computing
Dacheng Tao
Nanyang Technological University
artificial intelligence, machine learning, computer vision, image processing, data mining
Li Guo
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China