RGMem: Renormalization Group-based Memory Evolution for Language Agent User Profile

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM-based dialogue systems suffer from fixed context windows and static memory mechanisms, hindering the modeling of long-term user states and behavioral consistency across sessions. Mainstream approaches, such as RAG and explicit memory, support only fact-level storage and lack the capacity to distill deep preferences and latent user characteristics from fragmented, multi-turn dialogues. To address this, we propose a dynamic memory evolution framework inspired by renormalization group (RG) theory in physics, leveraging multi-scale information compression and emergence. Our method hierarchically coarse-grains and rescales dialogue histories, jointly optimizing semantic extraction and user insight to construct an explicitly structured, self-evolving memory. Experiments demonstrate significant improvements in user profile depth and robustness: the framework enables persistent long-term memory updates under noise, enhances personalized response quality, and strengthens cross-session coherence.
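The hierarchical coarse-graining and rescaling described above can be sketched as a small data structure. This is a minimal, hypothetical sketch (class and method names are not from the paper), assuming a pluggable `summarize` callable that stands in for the paper's LLM-based joint semantic/insight extraction:

```python
class RGMemory:
    """Hypothetical sketch of an RG-style hierarchical memory.

    Level 0 holds raw episodic fragments; whenever a level accumulates
    `block` items, they are compressed into one summary at the next
    level (coarse-graining), and the process repeats upward (rescaling).
    """

    def __init__(self, summarize, block=3):
        self.summarize = summarize  # stand-in for LLM-based semantic extraction
        self.block = block          # items merged per coarse-graining step
        self.levels = [[]]          # levels[k] = summaries at scale k

    def add_fragment(self, fragment):
        self.levels[0].append(fragment)
        self._coarse_grain(0)

    def _coarse_grain(self, k):
        # Compress full blocks at scale k into single items at scale k+1,
        # then rescale: apply the same rule at the coarser level.
        while len(self.levels[k]) >= self.block:
            group = self.levels[k][: self.block]
            self.levels[k] = self.levels[k][self.block :]
            if len(self.levels) == k + 1:
                self.levels.append([])
            self.levels[k + 1].append(self.summarize(group))
            self._coarse_grain(k + 1)

    def profile(self):
        # The highest non-empty level is the most macroscopic user profile.
        for level in reversed(self.levels):
            if level:
                return level[-1]
        return None
```

With `block=3`, nine fragments collapse into three level-1 summaries and then a single level-2 profile; in the paper's setting `summarize` would distill semantics and user insight rather than merely concatenate.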

📝 Abstract
Personalized and continuous interaction is key to user experience in today's large language model (LLM)-based conversational systems; however, finite context windows and static parametric memory make it difficult to model long-term user states and behavioral consistency across sessions. Existing solutions to this predicament, such as retrieval-augmented generation (RAG) and explicit memory systems, focus primarily on fact-level storage and retrieval and lack the capability to distill latent preferences and deep traits from multi-turn dialogues. This limits effective long-term user modeling, leaves personalized interaction shallow, and hinders cross-session continuity. To give language agents long-term memory and behavioral consistency in the LLM era, we propose RGMem, a self-evolving memory framework inspired by the renormalization group (RG) from classical physics. The framework organizes dialogue history at multiple scales: it first extracts semantics and user insights from episodic fragments, then, through hierarchical coarse-graining and rescaling operations, progressively forms a dynamically evolving user profile. The core innovation of our work is modeling memory evolution as a multi-scale process of information compression and emergence, which yields high-level, accurate user profiles from noisy, microscopic-level interactions.
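One way to make the abstract's claim concrete, that compression filters noise out of microscopic interactions, is to merge per-fragment preference signals at each scale and retain only the strongest. This is an illustrative assumption, not the paper's actual operator; the counting scheme and the `keep` parameter are invented for the sketch:

```python
from collections import Counter

def coarse_grain_preferences(fragment_counts, keep=2):
    """Illustrative coarse-graining step: merge noisy per-fragment
    preference counts and retain only the `keep` strongest signals,
    so one-off noise is dropped as memory moves to a coarser scale.
    (`keep` and the counting scheme are assumptions for illustration.)
    """
    total = Counter()
    for counts in fragment_counts:
        total.update(counts)  # aggregate signals across fragments
    return Counter(dict(total.most_common(keep)))
```

Recurring signals (e.g., a music preference mentioned across several sessions) survive each compression step, while signals that appear once fall below the cutoff and are discarded.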
Problem

Research questions and friction points this paper is trying to address.

Modeling cross-session long-term user states in LLMs
Distilling latent preferences from multi-turn dialogues
Achieving behavioral consistency in personalized conversational systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Renormalization group-inspired multi-scale memory organization
Hierarchical coarse-graining extracts latent user preferences
Dynamic memory evolution through information compression and emergence