🤖 AI Summary
Existing membership inference attacks (MIAs) lose much of their effectiveness on small language models (SLMs), leaving a critical gap in privacy risk assessment for resource-constrained settings. To address this, we propose *win-k*, a black-box MIA that extends the state-of-the-art *min-k* attack with a sliding-window mechanism to sharpen its discriminative power over low-confidence model outputs. Designed specifically for SLMs, *win-k* requires neither model gradients nor knowledge of the training data distribution; query-level API responses suffice. Experiments across eight SLMs (e.g., Phi-3, TinyLlama) and three datasets (including WikiText and C4) show that *win-k* consistently outperforms five existing MIA baselines on standard metrics (AUROC, TPR @ 1% FPR, and FPR @ 99% TPR), with the most pronounced gains on models under 3B parameters (average AUROC improvement of +8.2%). This work provides a systematic empirical study of membership inference vulnerability in SLMs together with a lightweight, plug-and-play tool for privacy auditing.
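The windowed scoring idea can be sketched in a few lines. Note the exact win-k formulation is not given here, so the following is an assumption: min-k averages the k% lowest token log-probabilities of a candidate text, and the windowed variant first smooths token log-probabilities with a sliding window before selecting the lowest-scoring windows (lower score suggests the sample was seen in training).

```python
# Hedged sketch: min-k-style membership score and an assumed windowed
# ("win-k"-style) variant. Window size and aggregation rule are
# illustrative assumptions, not the paper's exact definitions.

def min_k_score(token_logprobs, k=0.2):
    """min-k: average of the k% lowest token log-probabilities."""
    n = max(1, int(len(token_logprobs) * k))
    return sum(sorted(token_logprobs)[:n]) / n

def win_k_score(token_logprobs, window=5, k=0.2):
    """Assumed win-k: average token log-probs over sliding windows,
    then average the k% lowest window scores."""
    if len(token_logprobs) < window:
        return min_k_score(token_logprobs, k)
    window_means = [
        sum(token_logprobs[i:i + window]) / window
        for i in range(len(token_logprobs) - window + 1)
    ]
    n = max(1, int(len(window_means) * k))
    return sum(sorted(window_means)[:n]) / n
```

The intuition for windowing: on small models, individual token probabilities are noisy, so averaging over short spans before selecting the lowest-confidence regions can stabilize the membership signal.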
📝 Abstract
Small language models (SLMs) are increasingly valued for their efficiency and deployability in resource-constrained environments, making them useful for on-device, privacy-sensitive, and edge computing applications. Meanwhile, membership inference attacks (MIAs), which aim to determine whether a given sample was used in a model's training data, are an important threat with serious privacy and intellectual property implications. In this paper, we study MIAs on SLMs. Although MIAs have been shown to be effective on large language models (LLMs), they are relatively less studied on emerging SLMs, and furthermore, their effectiveness decreases as models get smaller. Motivated by this finding, we propose a new MIA called win-k, which builds on top of a state-of-the-art attack (min-k). We experimentally evaluate win-k by comparing it with five existing MIAs using three datasets and eight SLMs. Results show that win-k outperforms existing MIAs in terms of AUROC, TPR @ 1% FPR, and FPR @ 99% TPR, especially on smaller models.
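Of the metrics above, TPR @ 1% FPR is the least self-explanatory: it reports how many true members an attack catches while keeping false accusations of membership at or below 1%. A minimal sketch of its computation (assuming, as in min-k-style attacks, that a lower score means "predicted member"):

```python
# Illustrative TPR @ fixed FPR computation for evaluating an MIA.
# scores: membership scores (lower = more member-like, an assumption
# matching min-k-style scoring); labels: 1 = member, 0 = non-member.

def tpr_at_fpr(scores, labels, max_fpr=0.01):
    """Best true-positive rate over thresholds whose false-positive
    rate does not exceed max_fpr."""
    members = [s for s, y in zip(scores, labels) if y == 1]
    non_members = [s for s, y in zip(scores, labels) if y == 0]
    best_tpr = 0.0
    for t in sorted(scores):  # each score is a candidate threshold
        fpr = sum(s <= t for s in non_members) / len(non_members)
        if fpr <= max_fpr:
            tpr = sum(s <= t for s in members) / len(members)
            best_tpr = max(best_tpr, tpr)
    return best_tpr
```

FPR @ 99% TPR is the mirror image: the false-positive rate incurred when the threshold is loosened until 99% of true members are caught (lower is better).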