PersonalAI: Towards digital twins in the graph form

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to model long-term, personalized user history and generate temporally aware responses. Method: This paper proposes a hypergraph-based digital twin modeling framework for users. It introduces a composite knowledge graph integrating standard edges and two types of hyperedges, enabling LLMs to autonomously construct and dynamically update user representations. Built upon an extended AriGraph hypergraph architecture, the framework incorporates retrieval-augmented generation (RAG), temporal dialogue modeling, and robust reasoning over contradictory statements to ensure persistent memory of personalized knowledge and consistent inference. Contribution/Results: Evaluated on TriviaQA, HotpotQA, and an enhanced version of DiaASQ, the method demonstrates stable question-answering accuracy under temporal constraints and contradictory inputs, confirming strong robustness in knowledge graph construction and sustained personalization capability.
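The summary above describes a combined knowledge graph mixing standard triple edges with hyperedges that link arbitrary sets of nodes. As a rough illustration of that data structure (the paper does not spell out its internals, so the two hyperedge kinds, names, and retrieval step below are assumptions, not the authors' exact design):

```python
from dataclasses import dataclass, field

# Minimal sketch of a combined knowledge graph with standard edges and two
# kinds of hyperedges, loosely inspired by the extended AriGraph-style memory
# described above. The "episodic"/"semantic" split and all method names here
# are illustrative assumptions.

@dataclass
class CombinedGraph:
    # standard edges: (subject, relation, object) triples
    edges: set = field(default_factory=set)
    # hyperedges: each links an arbitrary set of nodes under one label
    hyperedges: dict = field(default_factory=lambda: {"episodic": [], "semantic": []})

    def add_edge(self, subj, rel, obj):
        self.edges.add((subj, rel, obj))

    def add_hyperedge(self, kind, nodes, label):
        self.hyperedges[kind].append((frozenset(nodes), label))

    def neighbors(self, node):
        # Retrieval step: collect every fact touching `node`
        # (such a set could seed a RAG prompt for the LLM).
        facts = [t for t in self.edges if node in (t[0], t[2])]
        for kind, hes in self.hyperedges.items():
            facts += [(kind, label) for nodes, label in hes if node in nodes]
        return facts

g = CombinedGraph()
g.add_edge("user", "lives_in", "Moscow")
g.add_hyperedge("episodic", {"user", "Moscow", "2024-05-01"}, "moved to Moscow")
```

The hyperedge layer is what a plain triple store cannot express: one labeled relation over more than two nodes at once (e.g. a dated event tying a user, a place, and a time together).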

📝 Abstract
The challenge of personalizing language models, specifically the ability to account for a user's history during interactions, is of significant interest. Despite recent advancements in large language models (LLMs) and Retrieval Augmented Generation that have enhanced the factual base of LLMs, the task of retaining extensive personal information and using it to generate personalized responses remains pertinent. To address this, we propose utilizing external memory in the form of knowledge graphs, which are constructed and updated by the LLM itself. We have expanded upon the ideas of the AriGraph architecture and, for the first time, introduced a combined graph featuring both standard edges and two types of hyperedges. Experiments conducted on the TriviaQA, HotpotQA and DiaASQ benchmarks indicate that this approach makes the process of graph construction and knowledge extraction unified and robust. Furthermore, we augmented the DiaASQ benchmark by incorporating parameters such as time into dialogues and introducing contradictory statements made by the same speaker at different times. Despite these modifications, the performance of the question-answering system remained robust, demonstrating the proposed architecture's ability to maintain and utilize temporal dependencies.
Problem

Research questions and friction points this paper is trying to address.

Personalizing language models using user history
Retaining personal information for personalized responses
Utilizing knowledge graphs for robust memory management
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge graphs as external memory
Combined graph with hyperedges
Robust temporal dependency handling
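The temporal-dependency point above concerns contradictory statements made by the same speaker at different times (the augmented DiaASQ setting). One simple way to resolve such contradictions is to timestamp every fact and answer queries with the most recent statement valid at the query time. This sketch is an illustrative assumption, not the paper's actual algorithm; the speaker, relation, and dates are invented for the example:

```python
from datetime import date

# Timestamped facts: (subject, relation, object, stated_at).
# The second entry contradicts the first; the timestamp disambiguates them.
facts = [
    ("alice", "favorite_phone", "Xiaomi", date(2023, 1, 10)),
    ("alice", "favorite_phone", "iPhone", date(2024, 3, 5)),
]

def current_value(subject, relation, as_of):
    """Return the most recently stated object for (subject, relation) as of a date."""
    candidates = [(t, obj) for s, r, obj, t in facts
                  if s == subject and r == relation and t <= as_of]
    # max() over (date, obj) tuples picks the latest statement.
    return max(candidates)[1] if candidates else None
```

Under this latest-wins rule, a query dated mid-2023 returns "Xiaomi" while the same query dated mid-2024 returns "iPhone", so both the old and the new statement remain answerable under their respective temporal constraints.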
Mikhail Menschikov
Skoltech, Moscow, Russia
Dmitry Evseev
Skoltech, Moscow, Russia
Ruslan Kostoev
Moscow State University
Ilya Perepechkin
Public joint stock company "Sberbank of Russia", Moscow, Russia
Ilnaz Salimov
Public joint stock company "Sberbank of Russia", Moscow, Russia
Victoria Dochkina
Public joint stock company "Sberbank of Russia", Moscow, Russia
Petr Anokhin
Lomonosov Moscow State University; Federal Medical Research Center
Evgeny Burnaev
Skoltech, Full Professor, Head of AI center, Head of research group, AIRI
Generative Modeling · Manifold Learning · Surrogate Modeling · 3D Deep Learning
Nikita Semenov
Skoltech, Moscow, Russia