Auditing Prompt Caching in Language Model APIs

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes a cross-user timing side channel in large language model (LLM) APIs that stems from shared prompt caching: by measuring response-time differences, an attacker can infer information about other users' prompts, and even undisclosed architectural details of a provider's models. The authors develop a black-box statistical auditing framework that combines precise timing measurements with hypothesis testing and significance analysis, and use it to detect globally shared caches in seven major LLM API providers, including OpenAI. Notably, the audit yields evidence that OpenAI's embedding model uses a decoder-only Transformer architecture, a detail that was not previously public, and shows that cache-induced timing variations can directly leak prompt information. Beyond uncovering an overlooked cache-related vulnerability in production LLM services, the work establishes a reproducible, principled auditing methodology and offers actionable guidance for API providers to refine caching policies and harden their services against timing-based inference attacks.

📝 Abstract
Prompt caching in large language models (LLMs) results in data-dependent timing variations: cached prompts are processed faster than non-cached prompts. These timing differences introduce the risk of side-channel timing attacks. For example, if the cache is shared across users, an attacker could identify cached prompts from fast API response times to learn information about other users' prompts. Because prompt caching may cause privacy leakage, transparency around the caching policies of API providers is important. To this end, we develop and conduct statistical audits to detect prompt caching in real-world LLM API providers. We detect global cache sharing across users in seven API providers, including OpenAI, resulting in potential privacy leakage about users' prompts. Timing variations due to prompt caching can also result in leakage of information about model architecture. Namely, we find evidence that OpenAI's embedding model is a decoder-only Transformer, which was previously not publicly known.
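The audit described in the abstract boils down to a two-sample timing comparison: send a prompt twice (so the second request may hit the cache), send fresh prompts as a control, and test whether the "possibly cached" latencies are significantly faster. A minimal sketch of that statistical test is below, using simulated latencies and a stdlib permutation test in place of the authors' actual measurement harness and test procedure; the latency values and sample sizes are illustrative assumptions, not figures from the paper.

```python
import random
import statistics

def permutation_test(fast, slow, n_perm=2000, seed=0):
    """One-sided two-sample permutation test on the difference in means.

    Returns the fraction of random label shufflings whose mean gap
    (slow - fast) is at least as large as the observed gap: a p-value
    for "the 'fast' group is genuinely faster".
    """
    rng = random.Random(seed)
    observed = statistics.mean(slow) - statistics.mean(fast)
    pooled = list(fast) + list(slow)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_fast = pooled[:len(fast)]
        perm_slow = pooled[len(fast):]
        if statistics.mean(perm_slow) - statistics.mean(perm_fast) >= observed:
            hits += 1
    return hits / n_perm

# Simulated per-request latencies in seconds (hypothetical numbers):
# a cache hit skips prompt prefill, so repeated prompts return faster.
rng = random.Random(42)
cached = [0.30 + rng.gauss(0, 0.02) for _ in range(50)]  # repeated prompts
fresh = [0.45 + rng.gauss(0, 0.02) for _ in range(50)]   # never-seen prompts

p = permutation_test(cached, fresh)
print(f"one-sided p-value: {p:.4f}")
# A small p-value is evidence that repeated prompts are served from a
# cache; run against another user's prompts, the same gap is the leak.
```

Against a real API the two latency lists would come from timed HTTP requests rather than a simulator, and the key privacy question is whether the speedup also appears when the *first* request was issued by a different user or API key (global cache sharing).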
Problem

Research questions and friction points this paper is trying to address.

Do real-world LLM APIs cache prompts, and can caching be detected from outside (black-box)?
If caches are shared across users, what can an attacker learn about other users' prompts?
Can cache-induced timing variations leak undisclosed model architecture details?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statistical timing audits that detect prompt caching in black-box APIs
Detection of global cache sharing across users, a direct privacy risk
Timing evidence that OpenAI's embedding model is a decoder-only Transformer