PrivacyPAD: A Reinforcement Learning Framework for Dynamic Privacy-Aware Delegation

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Users face an inherent trade-off between privacy and performance in LLM applications: uploading sensitive prompts to powerful cloud-based models risks data leakage, whereas privacy-preserving local small models suffer from inadequate capability. Existing static prompt rewriting methods impair semantic coherence and indiscriminately remove critical PII. Method: This work pioneers modeling privacy-aware prompt delegation as a sequential decision-making problem. We propose a reinforcement learning–based adaptive text chunking and routing framework that dynamically discriminates between maskable PII and task-critical information, jointly optimizing local masking and remote API invocation for fine-grained privacy–utility trade-offs. Contribution/Results: Evaluated on a high-density PII medical dataset, our approach significantly outperforms baselines, achieving state-of-the-art balance between privacy protection strength (e.g., PII anonymization fidelity) and downstream task performance (e.g., clinical QA accuracy).

📝 Abstract
When users submit queries to Large Language Models (LLMs), their prompts often contain sensitive data, forcing a difficult choice: send the query to a powerful proprietary LLM provider to achieve state-of-the-art performance but risk data exposure, or rely on smaller, local models that guarantee data privacy but often degrade task performance. Prior approaches have relied on static pipelines that use LLM rewriting, which shatters linguistic coherence and indiscriminately removes privacy-sensitive information, including task-critical content. We reformulate this challenge (Privacy-Conscious Delegation) as a sequential decision-making problem and introduce a novel reinforcement learning (RL) framework called PrivacyPAD to solve it. Our framework trains an agent to dynamically route text chunks, learning a policy that optimally balances the trade-off between privacy leakage and task performance. It implicitly distinguishes between replaceable Personally Identifiable Information (PII) (which it shields locally) and task-critical PII (which it strategically sends to the remote model for maximal utility). To validate our approach in complex scenarios, we also introduce a new medical dataset with high PII density. Our framework achieves a new state-of-the-art on the privacy-utility frontier, demonstrating the necessity of learned, adaptive policies for deploying LLMs in sensitive environments.
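The chunk-level delegation loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the learned RL policy is replaced with a rule-based stand-in, and all names (`route_chunk`, `delegate`, `PII_PATTERN`), the sentence-split chunking heuristic, and the toy PII regex are assumptions for illustration.

```python
import re

# Toy PII detector: matches SSN-like numbers and capitalized name pairs.
# A real system would use a trained PII recognizer.
PII_PATTERN = re.compile(r"\b(?:\d{3}-\d{2}-\d{4}|[A-Z][a-z]+ [A-Z][a-z]+)\b")

LOCAL, MASK_THEN_SEND, SEND_RAW = "local", "mask+send", "send"

def route_chunk(chunk: str, task_critical: bool) -> tuple[str, str]:
    """Stand-in for the learned policy: decide how to delegate one chunk.

    Replaceable PII is masked locally before remote delegation;
    task-critical chunks are sent as-is to preserve utility;
    PII-free chunks can be sent raw with no privacy cost.
    """
    if PII_PATTERN.search(chunk):
        if task_critical:
            return SEND_RAW, chunk  # utility outweighs the leakage cost
        return MASK_THEN_SEND, PII_PATTERN.sub("[PII]", chunk)
    return SEND_RAW, chunk

def delegate(prompt: str, critical_flags: list[bool]) -> list[tuple[str, str]]:
    """Split a prompt into chunks and route each one independently."""
    chunks = prompt.split(". ")
    return [route_chunk(c, f) for c, f in zip(chunks, critical_flags)]
```

In PrivacyPAD the per-chunk decision is made by a trained RL agent rather than fixed rules, so the maskable-vs-task-critical distinction is learned from reward signals instead of being hand-annotated.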
Problem

Research questions and friction points this paper is trying to address.

Balancing privacy protection with task performance in LLM query delegation
Dynamically routing sensitive data between local and remote models
Distinguishing replaceable from task-critical private information in prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning for dynamic query routing
Learns policy balancing privacy leakage and performance
Distinguishes between replaceable and task-critical PII
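The "policy balancing privacy leakage and performance" above implies a scalar reward trading the two off. A hedged sketch of such a reward, assuming a simple linear penalty (the coefficient `lambda_priv` and the leakage/utility measures are illustrative assumptions, not the paper's exact formulation):

```python
def reward(task_score: float, pii_leaked: int, pii_total: int,
           lambda_priv: float = 0.5) -> float:
    """Illustrative privacy-utility reward: task utility minus a penalty
    proportional to the fraction of PII units exposed to the remote model."""
    leakage = pii_leaked / pii_total if pii_total else 0.0
    return task_score - lambda_priv * leakage
```

Sweeping `lambda_priv` would trace out a privacy-utility frontier: higher values push the agent toward masking more PII at some cost to downstream task performance.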