Topology Matters: Measuring Memory Leakage in Multi-Agent LLMs

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work lacks systematic modeling of the relationship between graph topology and privacy leakage in multi-agent large language model (LLM) systems. Method: We propose MAMA, a framework that integrates synthetically generated PII-annotated documents, desensitization task instructions, and a two-stage Engram-Resonance attack protocol to conduct multi-round interactive experiments across six canonical network topologies and multiple LLMs. Contribution/Results: Our study is the first to quantitatively characterize how topology governs memory leakage. We find that shorter graph distances and higher node centrality correlate strongly with increased leakage; fully connected topologies exhibit the highest leakage, while chain topologies are most robust. Leakage rises rapidly in early rounds and then asymptotically plateaus. Based on these findings, we derive topology-aware privacy-preserving design principles. Crucially, while model architecture affects absolute leakage magnitude, it does not alter the relative risk ranking across topologies.

📝 Abstract
Graph topology is a fundamental determinant of memory leakage in multi-agent LLM systems, yet its effects remain poorly quantified. We introduce MAMA (Multi-Agent Memory Attack), a framework that measures how network structure shapes leakage. MAMA operates on synthetic documents containing labeled Personally Identifiable Information (PII) entities, from which we generate sanitized task instructions. We execute a two-phase protocol: Engram (seeding private information into a target agent's memory) and Resonance (multi-round interaction where an attacker attempts extraction). Over up to 10 interaction rounds, we quantify leakage as the fraction of ground-truth PII recovered from attacking agent outputs via exact matching. We systematically evaluate six common network topologies (fully connected, ring, chain, binary tree, star, and star-ring), varying agent counts $n \in \{4, 5, 6\}$, attacker-target placements, and base models. Our findings reveal consistent patterns: fully connected graphs exhibit maximum leakage while chains provide strongest protection; shorter attacker-target graph distance and higher target centrality significantly increase vulnerability; leakage rises sharply in early rounds before plateauing; model choice shifts absolute leakage rates but preserves topology rankings; temporal/locational PII attributes leak more readily than identity credentials or regulated identifiers. These results provide the first systematic mapping from architectural choices to measurable privacy risk, yielding actionable guidance: prefer sparse or hierarchical connectivity, maximize attacker-target separation, limit node degree and network radius, avoid shortcuts bypassing hubs, and implement topology-aware access controls.
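The leakage metric described in the abstract (the fraction of ground-truth PII recovered from attacker outputs via exact matching) can be sketched as follows. This is an illustrative reconstruction, not the paper's released code; the function name and example PII values are assumptions.

```python
# Hypothetical sketch of the leakage metric: the fraction of
# ground-truth PII entities that appear verbatim (exact match)
# anywhere in the attacking agent's outputs across rounds.
def leakage_rate(ground_truth_pii, attacker_outputs):
    """ground_truth_pii: set of PII strings seeded into the target agent.
    attacker_outputs: list of the attacker's text outputs, one per round."""
    transcript = " ".join(attacker_outputs)
    recovered = {pii for pii in ground_truth_pii if pii in transcript}
    return len(recovered) / len(ground_truth_pii)

# Example with synthetic PII: 2 of 4 seeded entities are recovered.
rate = leakage_rate(
    {"alice@example.com", "1985-03-12", "Berlin", "DE-12345"},
    ["The target mentioned living in Berlin.", "Her birthday is 1985-03-12."],
)
# rate == 0.5
```

Exact matching makes the metric conservative: paraphrased or partially leaked PII does not count, so reported rates are a lower bound on disclosure.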
Problem

Research questions and friction points this paper is trying to address.

Quantifies memory leakage across multi-agent LLM network topologies
Measures how graph structure affects private information recovery
Evaluates topology impact on PII vulnerability in agent interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework measures network topology impact on memory leakage
Two-phase protocol seeds and extracts private information systematically
Evaluates six topologies to map architecture to privacy risk
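The topology findings above hinge on attacker-target graph distance, which the paper finds correlates inversely with leakage. A minimal sketch of two of the six studied topologies and that distance computation, using plain adjacency lists and BFS (agent indices and function names are illustrative, not from the paper):

```python
from collections import deque

def chain(n):
    """Chain topology: agent i talks only to i-1 and i+1."""
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def fully_connected(n):
    """Fully connected topology: every agent talks to every other."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def distance(adj, src, dst):
    """BFS shortest-path length (in hops) between two agents."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # unreachable

# With 5 agents, placing the attacker (0) and target (4) at opposite
# ends gives 4 hops in a chain but only 1 hop when fully connected,
# matching the reported chain-robust / fully-connected-vulnerable ordering.
d_chain = distance(chain(5), 0, 4)           # 4
d_full = distance(fully_connected(5), 0, 4)  # 1
```

The same BFS applies to the other four topologies (ring, binary tree, star, star-ring) once their adjacency lists are defined, which is how "maximize attacker-target separation" becomes a checkable design rule.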