SRPG: Semantically Reconstructed Privacy Guard for Zero-Trust Privacy in Educational Multi-Agent Systems

📅 2025-12-03
🤖 AI Summary
Existing privacy-preserving methods for educational multi-agent systems struggle to prevent PII leakage from LLM-mediated dialogues involving minors while preserving pedagogical semantic integrity. Method: We propose a Semantic Reconstruction Privacy Protection mechanism featuring a novel dual-stream architecture: a sanitization stream rigorously detects and de-identifies PII in unstructured dialogues, and a reconstruction stream leverages large language models to restore mathematical pedagogical logic and contextual coherence—decoupling privacy sanitization from semantic recovery. Contribution/Results: The mechanism guarantees zero sensitive-data leakage under a zero-trust paradigm while retaining instructional utility. Experiments on the MathDial dataset show that, when integrated with GPT-4o, it achieves 0% attack success rate and an exact-match accuracy of 0.8267—significantly outperforming pure-LLM baselines.

📝 Abstract
Multi-Agent Systems (MAS) built on large language models (LLMs) enable personalized education but risk leaking minors' personally identifiable information (PII) through unstructured dialogue. Existing privacy methods struggle to balance security and utility: role-based access control fails on unstructured text, while naive masking destroys pedagogical context. We propose SRPG, a privacy guard for educational MAS that uses a Dual-Stream Reconstruction Mechanism: a strict sanitization stream ensures zero PII leakage, and an LLM-driven context reconstruction stream recovers the mathematical logic. This decouples instructional content from private data, preserving teaching efficacy. Tests on MathDial show SRPG works across models; with GPT-4o, it achieves a 0.0000 Attack Success Rate (ASR, i.e., zero leakage) and 0.8267 Exact Match, far outperforming the zero-trust Pure-LLM baseline (0.2138). SRPG effectively protects minors' privacy without sacrificing mathematical instructional quality.
Problem

Research questions and friction points this paper is trying to address.

Protecting minors' personal information in educational AI dialogues
Balancing privacy security with teaching context preservation
Preventing PII leakage in unstructured multi-agent learning systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Stream Reconstruction Mechanism separates sanitization and context recovery
LLM-driven context reconstruction preserves mathematical logic after PII removal
Achieves zero PII leakage while maintaining high instructional accuracy
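The dual-stream idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the sanitization stream is reduced to deterministic pattern masking (a real system would use a PII detector tuned for dialogue with minors), and the LLM-driven reconstruction stream is stubbed out with a simple placeholder rewrite; the patterns and names are hypothetical.

```python
import re

# Hypothetical PII patterns for the sanitization stream. In SRPG this
# detection is rigorous; here it is a toy regex set for illustration.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "NAME": re.compile(r"\b(?:Emma|Liam|Noah)\b"),  # stand-in for a roster lookup
}

def sanitize(dialogue: str) -> str:
    """Sanitization stream: strictly replace detected PII with typed tags."""
    for label, pattern in PII_PATTERNS.items():
        dialogue = pattern.sub(f"[{label}]", dialogue)
    return dialogue

def reconstruct(sanitized: str) -> str:
    """Reconstruction stream (stub): SRPG uses an LLM to restore
    pedagogical coherence; here we only smooth the student placeholder,
    leaving the mathematical content untouched."""
    return sanitized.replace("[NAME]", "the student")

def srpg_guard(dialogue: str) -> str:
    """Decoupled pipeline: sanitize first, then reconstruct context."""
    return reconstruct(sanitize(dialogue))

example = "Emma (emma@school.org) asked why 3/4 + 1/8 = 7/8."
print(srpg_guard(example))
# → the student ([EMAIL]) asked why 3/4 + 1/8 = 7/8.
```

Note that the mathematical statement survives masking untouched; only identity-bearing tokens are rewritten, which is the decoupling the paper attributes its utility retention to.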
Shuang Guo
Technische Universität Berlin
Event Cameras · Event-based Vision · Event-based SLAM · State Estimation · Numerical Optimization
Zihui Li
Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan, China