Win-k: Improved Membership Inference Attacks on Small Language Models

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing membership inference attacks (MIAs) suffer significant performance degradation on small language models (SLMs), leaving a critical gap in privacy risk assessment for resource-constrained settings. To address this, we propose *win-k*, a black-box MIA that extends *min-k* with a sliding-window mechanism to enhance discriminative power over low-confidence model outputs. Designed specifically for SLMs, *win-k* requires no access to model gradients or the training data distribution, only query-level API responses. Extensive experiments across multiple SLMs (e.g., Phi-3, TinyLlama) and datasets (WikiText, C4) show that *win-k* consistently outperforms five state-of-the-art MIA baselines on standard metrics, including AUROC, TPR@1%FPR, and FPR@99%TPR, with particularly pronounced gains on models under 3B parameters (average AUROC improvement of +8.2%). This work provides the first systematic empirical validation of membership inference vulnerability in SLMs and offers an efficient, lightweight, plug-and-play tool for privacy auditing.
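The core scoring idea can be sketched as follows. In min-k, a sample's membership score is the average of its k% lowest token log-probabilities; win-k, per the summary above, replaces individual tokens with sliding-window averages. This is a hypothetical reconstruction for illustration: the paper's exact window size, aggregation, and parameter choices are not given here, and the function names (`min_k_score`, `win_k_score`) are placeholders.

```python
import numpy as np

def min_k_score(token_logprobs, k=0.2):
    # Min-k%: average the k% lowest token log-probabilities.
    # Higher (less negative) scores suggest the sample was a training member.
    lp = np.sort(np.asarray(token_logprobs, dtype=float))
    n = max(1, int(len(lp) * k))
    return float(lp[:n].mean())

def win_k_score(token_logprobs, k=0.2, window=5):
    # Sketch of win-k: average log-probabilities within each sliding
    # window of `window` consecutive tokens, then average the k% lowest
    # window scores. Windowing smooths noisy per-token confidences,
    # which is plausibly why it helps on smaller, less confident models.
    lp = np.asarray(token_logprobs, dtype=float)
    if len(lp) < window:
        return min_k_score(lp, k)
    # Moving average over consecutive windows of length `window`.
    window_means = np.convolve(lp, np.ones(window) / window, mode="valid")
    window_means.sort()
    n = max(1, int(len(window_means) * k))
    return float(window_means[:n].mean())
```

In a black-box setting, `token_logprobs` would be obtained from query-level API responses (per-token log-probabilities of the candidate text under the target model); membership is then decided by thresholding the score.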

📝 Abstract
Small language models (SLMs) are increasingly valued for their efficiency and deployability in resource-constrained environments, making them useful for on-device, privacy-sensitive, and edge computing applications. Meanwhile, membership inference attacks (MIAs), which aim to determine whether a given sample was used in a model's training, are an important threat with serious privacy and intellectual property implications. In this paper, we study MIAs on SLMs. Although MIAs were shown to be effective on large language models (LLMs), they are relatively less studied on emerging SLMs, and furthermore, their effectiveness decreases as models get smaller. Motivated by this finding, we propose a new MIA called win-k, which builds on top of a state-of-the-art attack (min-k). We experimentally evaluate win-k by comparing it with five existing MIAs using three datasets and eight SLMs. Results show that win-k outperforms existing MIAs in terms of AUROC, TPR @ 1% FPR, and FPR @ 99% TPR metrics, especially on smaller models.
Problem

Research questions and friction points this paper is trying to address.

Improving membership inference attacks on small language models
Addressing decreased attack effectiveness in smaller models
Evaluating win-k against existing attacks on SLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes win-k attack for small models
Improves upon min-k state-of-the-art
Outperforms existing MIAs on SLMs
Roya Arkhmammadova
Department of Computer Engineering, Koç University, Istanbul, Turkey
Hosein Madadi Tamar
Department of Computer Engineering, Koç University, Istanbul, Turkey
M. Emre Gursoy
Assistant Professor of Computer Science, Koç University
Privacy · Security · AI Security · Machine Learning · Internet of Things