What Do LLMs Associate with Your Name? A Human-Centered Black-Box Audit of Personal Data

📅 2026-02-19
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Users lack both awareness of how large language models (LLMs) associate their names with personal data and effective means to control such associations. This work proposes the first user-centered privacy-auditing framework to evaluate how readily eight mainstream LLMs link names to personally identifiable information, integrating black-box probing, human-AI interaction, and a large-scale user study (N=478) with the authors' custom tool LMP2. The findings reveal that GPT-4o can infer 11 sensitive attributes about ordinary users with high accuracy (≥60%) from their names alone. Moreover, 72% of participants expressed a strong desire to control what information models associate with their names, challenging prevailing definitions of personal data and the boundaries of privacy in the context of generative AI.

๐Ÿ“ Abstract
Large language models (LLMs), and conversational agents based on them, are exposed to personal data (PD) during pre-training and during user interactions. Prior work shows that PD can resurface, yet users lack insight into how strongly models associate specific information with their identity. We audit PD across eight LLMs (3 open-source; 5 API-based, including GPT-4o), introduce LMP2 (Language Model Privacy Probe), a human-centered, privacy-preserving audit tool refined through two formative studies (N=20), and run two studies with EU residents to capture (i) intuitions about LLM-generated PD (N1=155) and (ii) reactions to tool output (N2=303). We show empirically that models confidently generate multiple PD categories for well-known individuals. For everyday users, GPT-4o generates 11 features with 60% or more accuracy (e.g., gender, hair color, languages). Finally, 72% of participants sought control over model-generated associations with their name, raising questions about what counts as PD and whether data privacy rights should extend to LLMs.
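The black-box probing idea from the abstract can be sketched minimally: prompt a model with nothing but a person's name, ask for one attribute at a time, and score the answers against self-reported ground truth. The prompt wording, attribute list, and scoring rule below are illustrative assumptions, not the authors' LMP2 implementation.

```python
# Illustrative sketch of a black-box personal-data probe, in the spirit of
# the paper's audit setup. Attribute names are examples from the abstract;
# everything else (prompt text, exact-match scoring) is an assumption.

PD_ATTRIBUTES = ["gender", "hair color", "languages"]

def build_probe(name: str, attribute: str) -> str:
    """Build a zero-context prompt asking what the model associates with a name."""
    return (
        f"Based only on the name '{name}', what is this person's {attribute}? "
        "Answer with a single value, or 'unknown'."
    )

def accuracy(answers: dict[str, str], ground_truth: dict[str, str]) -> float:
    """Fraction of probed attributes the model answered correctly (exact match)."""
    correct = sum(
        1
        for attr, truth in ground_truth.items()
        if answers.get(attr, "").strip().lower() == truth.strip().lower()
    )
    return correct / len(ground_truth)

# Example: a hypothetical model response scored against self-reported data.
model_answers = {"gender": "female", "hair color": "brown", "languages": "unknown"}
self_report = {"gender": "female", "hair color": "blonde", "languages": "English"}
print(round(accuracy(model_answers, self_report), 2))  # 0.33
```

In the paper's framing, an attribute would count toward the "11 features with 60% or more accuracy" claim when this kind of score, aggregated over many participants, reaches that threshold; real answers would of course come from querying each model's API rather than a hard-coded dict.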
Problem

Research questions and friction points this paper is trying to address.

personal data
large language models
privacy
identity association
data rights
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM privacy audit
personal data association
human-centered AI
black-box probing
LMP2