PrivGemo: Privacy-Preserving Dual-Tower Graph Retrieval for Empowering LLM Reasoning with Memory Augmentation

📅 2026-01-13
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing approaches to enhancing large language model (LLM) reasoning with private knowledge graphs face challenges including structural leakage, uncontrolled remote interactions, fragile multi-hop multi-entity reasoning, and insufficient reuse of past experiences. To address these issues, this work proposes PrivGemo, a framework featuring a dual-tower architecture that locally preserves the original graph while providing an anonymized view to remote LLMs. PrivGemo introduces anonymized path representations beyond simple name masking, a hierarchical privacy controller, and a privacy-aware experience memory mechanism, effectively mitigating both semantic and structural leakage while enabling robust and efficient multi-hop reasoning. Evaluated on six benchmarks, PrivGemo achieves state-of-the-art performance, outperforming the strongest baseline by up to 17.1% and enabling compact models such as Qwen3-4B to match the reasoning capabilities of GPT-4-Turbo.

πŸ“ Abstract
Knowledge graphs (KGs) provide structured evidence that can ground large language model (LLM) reasoning for knowledge-intensive question answering. However, many practical KGs are private, and sending retrieved triples or exploration traces to closed-source LLM APIs introduces leakage risk. Existing privacy treatments focus on masking entity names, but they still face four limitations: structural leakage under semantic masking, uncontrollable remote interaction, fragile multi-hop and multi-entity reasoning, and limited experience reuse for stability and efficiency. To address these issues, we propose PrivGemo, a privacy-preserving retrieval-augmented framework for KG-grounded reasoning with memory-guided exposure control. PrivGemo uses a dual-tower design to keep raw KG knowledge local while enabling remote reasoning over an anonymized view that goes beyond name masking to limit both semantic and structural exposure. PrivGemo supports multi-hop, multi-entity reasoning by retrieving anonymized long-hop paths that connect all topic entities, while keeping grounding and verification on the local KG. A hierarchical controller and a privacy-aware experience memory further reduce unnecessary exploration and remote interactions. Comprehensive experiments on six benchmarks show that PrivGemo achieves overall state-of-the-art results, outperforming the strongest baseline by up to 17.1%. Furthermore, PrivGemo enables smaller models (e.g., Qwen3-4B) to achieve reasoning performance comparable to that of GPT-4-Turbo.
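The abstract contrasts PrivGemo's anonymized view with the baseline idea of simple entity-name masking. A minimal sketch of that baseline, masking entity names in a retrieved KG path while keeping the de-anonymization map local, helps make the contrast concrete. This is an illustration only, not the paper's method: PrivGemo goes beyond name masking to also limit structural exposure, and all names below (`mask_path`, the example triples) are hypothetical.

```python
# Illustrative sketch of baseline entity-name masking over a KG path.
# PrivGemo's actual anonymization goes further (it also limits structural
# leakage); this only shows the "mask names, keep relations" starting point.

def mask_path(path):
    """Replace entity names with placeholders; keep relation labels visible.

    Returns the anonymized path (the part sent to the remote LLM) and the
    local entity map needed to de-anonymize the LLM's answer."""
    entity_to_alias = {}

    def alias(entity):
        if entity not in entity_to_alias:
            entity_to_alias[entity] = f"E{len(entity_to_alias) + 1}"
        return entity_to_alias[entity]

    masked = [(alias(h), r, alias(t)) for h, r, t in path]
    return masked, entity_to_alias

# Hypothetical multi-hop path connecting two topic entities.
path = [("Alice", "works_at", "AcmeCorp"),
        ("AcmeCorp", "headquartered_in", "Berlin")]
masked, mapping = mask_path(path)
print(masked)   # [('E1', 'works_at', 'E2'), ('E2', 'headquartered_in', 'E3')]
print(mapping)  # {'Alice': 'E1', 'AcmeCorp': 'E2', 'Berlin': 'E3'}
```

Note that the relation labels (`works_at`, `headquartered_in`) and the path's shape remain visible to the remote model, which is exactly the structural leakage the abstract identifies as a limitation of name masking alone.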
Problem

Research questions and friction points this paper is trying to address.

privacy-preserving
knowledge graph
LLM reasoning
structural leakage
memory augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

privacy-preserving retrieval
dual-tower architecture
knowledge graph grounding
multi-hop reasoning
memory-augmented LLM