AI Summary
This work addresses the privacy risks posed by shared key-value caching in multi-tenant large language model services, where prompt-side channel leakage enables adversarial reconstruction of user inputs. Existing attack methods suffer from low efficiency and insufficient accuracy in risk assessment. To overcome these limitations, the authors propose OptiLeak, a novel framework that integrates reinforcement learning with Direct Preference Optimization (DPO) to reconstruct prompts through a two-stage strategy. First, it automatically identifies sensitive "hard tokens" via likelihood ranking; then, it constructs preference pairs for DPO training, eliminating the need for manual annotation and avoiding the overfitting associated with supervised fine-tuning. Experiments across three benchmarks in healthcare and finance demonstrate that OptiLeak reduces the average number of queries per token by up to 12.48× compared to baselines and achieves consistent performance gains across models ranging from 3B to 14B parameters.
Abstract
Multi-tenant LLM serving frameworks widely adopt shared Key-Value caches to enhance efficiency. However, this creates side-channel vulnerabilities enabling prompt leakage attacks. Prior studies identified these attack surfaces yet focused on expanding attack vectors rather than optimizing attack performance, reporting impractically high attack costs that underestimate the true privacy risk. We propose OptiLeak, a reinforcement learning-enhanced framework that maximizes prompt reconstruction efficiency through two-stage fine-tuning. Our key insight is that domain-specific "hard tokens" -- terms difficult to predict yet carrying sensitive information -- can be automatically identified via likelihood ranking and used to construct preference pairs for Direct Preference Optimization, eliminating manual annotation. This enables effective preference alignment while avoiding the overfitting issues of extended supervised fine-tuning. Evaluated on three benchmarks spanning medical and financial domains, OptiLeak achieves up to a 12.48× reduction in average requests per token compared to baseline approaches, with consistent improvements across model scales from 3B to 14B parameters. Our findings demonstrate that cache-based prompt leakage poses a more severe threat than previously reported, underscoring the need for robust cache isolation in production deployments.
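The two data-construction steps described above -- ranking tokens by model likelihood to flag "hard tokens", then pairing candidate reconstructions into chosen/rejected preference pairs for DPO -- can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the toy log-probabilities, the hard-token fraction, and the scoring rule (preferring the candidate that recovers more hard tokens) are all assumptions introduced here for clarity.

```python
def find_hard_tokens(tokens, log_probs, fraction=0.2):
    """Return the lowest-likelihood tokens (hardest for the model
    to predict, hence likely to carry sensitive information)."""
    ranked = sorted(zip(tokens, log_probs), key=lambda pair: pair[1])
    k = max(1, int(len(ranked) * fraction))
    return [tok for tok, _ in ranked[:k]]

def build_preference_pairs(hard_tokens, candidates):
    """Pair up candidate reconstructions; prefer the one recovering
    more hard tokens, so no manual annotation is needed."""
    def score(cand):
        return sum(1 for t in hard_tokens if t in cand)
    pairs = []
    for a, b in zip(candidates[::2], candidates[1::2]):
        if score(a) == score(b):
            continue  # no preference signal from this pair, skip it
        chosen, rejected = (a, b) if score(a) > score(b) else (b, a)
        pairs.append({"chosen": chosen, "rejected": rejected})
    return pairs

# Toy example with made-up per-token log-probabilities: the rare
# clinical terms get the lowest likelihood and are flagged as hard.
tokens = ["patient", "has", "stage", "III", "melanoma"]
log_probs = [-1.2, -0.3, -2.5, -4.1, -5.0]
hard = find_hard_tokens(tokens, log_probs, fraction=0.4)
pairs = build_preference_pairs(
    hard,
    [["patient", "has", "melanoma"], ["patient", "is", "sick"]],
)
```

In a real attack pipeline the log-probabilities would come from the attacker's reconstruction model rather than a hard-coded list, and the resulting `chosen`/`rejected` pairs would feed directly into a standard DPO training loop.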