🤖 AI Summary
This work identifies and exploits the *generation budget*—a novel side channel in Retrieval-Augmented Generation (RAG) systems—as the basis for BudgetLeak, the first membership inference attack (MIA) framework targeting black-box RAG systems. Existing MIAs underperform in RAG settings due to weak membership signals; BudgetLeak overcomes this by modeling how response quality evolves across multiple token budgets via sequence modeling and clustering, enabling detection of whether a target sample belongs to the system's retrieval corpus. It requires no access to model parameters, gradients, or internal states—only standard API outputs. Extensive evaluation across four datasets, three large language models, and two retrievers demonstrates that BudgetLeak significantly outperforms existing baselines in attack success rate, while achieving high precision, computational efficiency, and practical deployability. This study is the first to empirically establish the generation budget as a critical security vulnerability in RAG systems.
📝 Abstract
Retrieval-Augmented Generation (RAG) enhances large language models by integrating external knowledge, but its reliance on proprietary or sensitive corpora poses data risks, including privacy leakage and unauthorized data usage. Membership inference attacks (MIAs) are a common technique for assessing such risks, yet existing approaches underperform in RAG due to black-box constraints and the absence of strong membership signals. In this paper, we identify a previously unexplored side channel in RAG systems: the generation budget, which controls the maximum number of tokens allowed in a generated response. Varying this budget reveals observable behavioral differences between member and non-member queries, as responses to member queries gain quality more rapidly with larger budgets. Building on this insight, we propose BudgetLeak, a novel membership inference attack that probes responses under different budgets and analyzes metric evolution via sequence modeling or clustering. Extensive experiments across four datasets, three LLM generators, and two retrievers demonstrate that BudgetLeak consistently outperforms existing baselines, while maintaining high efficiency and practical viability. Our findings reveal a previously overlooked data risk in RAG systems and highlight the need for new defenses.
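The probing strategy described above can be sketched in a few lines. The sketch below is illustrative only, not the paper's actual method: `rag_query` (a black-box RAG endpoint taking a `max_tokens` budget) and `score` (any response-quality metric) are hypothetical placeholders, and the clustering step is a minimal 1-D 2-means over the average quality gain per budget step, standing in for the paper's sequence-modeling/clustering analysis.

```python
BUDGETS = [32, 64, 128, 256]  # hypothetical token-budget probe points

def trajectory(rag_query, question, score, budgets=BUDGETS):
    """Query the RAG system once per budget; record a quality score per response."""
    return [score(rag_query(question, max_tokens=b)) for b in budgets]

def slope(traj):
    """Average per-step quality gain; member queries are hypothesized to gain faster."""
    return (traj[-1] - traj[0]) / (len(traj) - 1)

def split_members(trajs, iters=50):
    """1-D 2-means on quality slopes: the higher-centroid cluster is labeled 'member'."""
    slopes = [slope(t) for t in trajs]
    lo, hi = min(slopes), max(slopes)  # initialize the two centroids
    for _ in range(iters):
        groups = [[], []]
        for v in slopes:
            groups[abs(v - hi) < abs(v - lo)].append(v)  # assign to nearer centroid
        lo = sum(groups[0]) / len(groups[0]) if groups[0] else lo
        hi = sum(groups[1]) / len(groups[1]) if groups[1] else hi
    return [abs(v - hi) < abs(v - lo) for v in slopes]  # True = inferred member
```

In practice the attacker would collect one trajectory per candidate sample via `trajectory(...)` and pass the batch to `split_members`; the key signal is that member trajectories climb steeply toward high quality while non-member trajectories stay comparatively flat.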