GAMA: A General Anonymizing Multi-Agent System for Privacy Preservation Enhanced by Domain Rules and Disproof Method

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the privacy risks that arise when the large language models (LLMs) backing a multi-agent system (MAS) are hosted on public remote servers, this paper proposes GAMA, a General Anonymizing Multi-Agent system. GAMA divides the agents' workspace into private and public spaces so that sensitive data is processed only locally, preventing leakage of raw private information; only anonymized data enters the public space. It introduces two mechanisms, Domain-Rule-based Knowledge Enhancement (DRKE) and Disproof-based Logic Enhancement (DLE), which jointly mitigate the semantic loss caused by anonymization. Experiments on two public question-answering benchmarks (Trivia Creative Writing and Logic Grid Puzzle) and two newly designed privacy datasets (Knowledge Privacy Preservation and Logic Privacy Preservation) show that GAMA outperforms state-of-the-art models on the question-answering tasks while also preserving knowledge and logic privacy effectively.

📝 Abstract
With the rapid advancement of Large Language Models (LLMs), LLM-based agents exhibit exceptional abilities in understanding and generating natural language, facilitating human-like collaboration and information transmission in LLM-based Multi-Agent Systems (MAS). High-performance LLMs are often hosted on remote servers in public spaces. When tasks involve private data, MAS cannot securely utilize these LLMs without privacy-preserving mechanisms. To address this challenge, we propose a General Anonymizing Multi-Agent system (GAMA), which divides the agents' workspace into private and public spaces and protects privacy through an anonymizing mechanism. In the private space, agents handle sensitive data; in the public space, only anonymized data is used. GAMA incorporates two key modules to mitigate the semantic loss caused by anonymization: Domain-Rule-based Knowledge Enhancement (DRKE) and Disproof-based Logic Enhancement (DLE). We evaluate GAMA on two public question-answering datasets: Trivia Creative Writing and Logic Grid Puzzle. The results demonstrate that GAMA achieves superior performance compared to state-of-the-art models. To further assess its privacy-preserving capabilities, we design two new datasets: Knowledge Privacy Preservation and Logic Privacy Preservation. The final results highlight GAMA's effectiveness in both task processing and privacy preservation.
Problem

Research questions and friction points this paper is trying to address.

Protecting privacy in multi-agent systems using LLMs
Anonymizing sensitive data between private and public spaces
Reducing semantic loss from anonymization with enhancement modules
Innovation

Methods, ideas, or system contributions that make the work stand out.

Divides workspace into private and public spaces
Uses Domain-Rule-based Knowledge Enhancement module
Implements Disproof-based Logic Enhancement module
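The private/public split above can be sketched as a placeholder-based anonymizer: sensitive entities are replaced with stable tokens before text leaves the private space, and the entity map stays local so original values can be restored when responses return from the public space. This is a minimal illustrative sketch, not the paper's implementation; the names (`Anonymizer`, `[ENTITY_n]` tokens) are assumptions.

```python
class Anonymizer:
    """Illustrative placeholder-based anonymizer. The entity map never
    leaves the private space; only anonymized text is sent to public LLMs."""

    def __init__(self):
        self._forward = {}   # real value -> placeholder (kept private)
        self._reverse = {}   # placeholder -> real value (kept private)

    def anonymize(self, text, sensitive_entities):
        """Replace each sensitive entity with a stable placeholder token."""
        for entity in sensitive_entities:
            if entity not in self._forward:
                token = f"[ENTITY_{len(self._forward)}]"
                self._forward[entity] = token
                self._reverse[token] = entity
            text = text.replace(entity, self._forward[entity])
        return text

    def deanonymize(self, text):
        """Restore original values after the public-space agents respond."""
        for token, entity in self._reverse.items():
            text = text.replace(token, entity)
        return text


anon = Anonymizer()
public_text = anon.anonymize("Alice met Bob in Wuxi.", ["Alice", "Bob"])
# public_text == "[ENTITY_0] met [ENTITY_1] in Wuxi."
restored = anon.deanonymize(public_text)
# restored == "Alice met Bob in Wuxi."
```

Keeping the mapping in one local object mirrors the paper's design constraint: public-space agents only ever see placeholders, so raw private information cannot leak through remote LLM calls.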
Hailong Yang
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
Renhuo Zhao
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
Zhaohong Deng
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
Guanjin Wang
Murdoch University
Machine Learning · Transfer Learning · Explainable AI · Health Data Analytics