User Privacy and Large Language Models: An Analysis of Frontier Developers' Privacy Policies

📅 2025-09-04
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This study identifies critical privacy risks in large language model (LLM) development, including opaque data collection practices, unauthorized use of children’s data, inadequate handling of sensitive information, and indefinite retention of user chat data—stemming from industry-wide default assumptions that user inputs may be used for model training. Method: Leveraging the California Consumer Privacy Act (CCPA), we develop the first qualitative coding framework tailored to LLM training contexts and conduct a cross-sectional content analysis and legal interpretation of privacy policies from six leading AI companies. Contribution/Results: We systematically demonstrate that none provide an effective opt-out mechanism for training data usage; four explicitly process children’s data; five impose no storage duration limits; and all employ ambiguous language while concealing key data practices. The framework serves as an actionable assessment tool for regulators and informs industry-wide reform toward “collection-by-exception” and purpose limitation principles.

📝 Abstract
Hundreds of millions of people now regularly interact with large language models via chatbots. Model developers are eager to acquire new sources of high-quality training data as they race to improve model capabilities and win market share. This paper analyzes the privacy policies of six U.S. frontier AI developers to understand how they use their users' chats to train models. Drawing primarily on the California Consumer Privacy Act, we develop a novel qualitative coding schema that we apply to each developer's relevant privacy policies to compare data collection and use practices across the six companies. We find that all six developers appear to employ their users' chat data to train and improve their models by default, and that some retain this data indefinitely. Developers may collect and train on personal information disclosed in chats, including sensitive information such as biometric and health data, as well as files uploaded by users. Four of the six companies we examined appear to include children's chat data for model training, as well as customer data from other products. On the whole, developers' privacy policies often lack essential information about their practices, highlighting the need for greater transparency and accountability. We address the implications of users' lack of consent for the use of their chat data for model training, data security issues arising from indefinite chat data retention, and training on children's chat data. We conclude by providing recommendations to policymakers and developers to address the data privacy challenges posed by LLM-powered chatbots.
Problem

Research questions and friction points this paper is trying to address.

Analyzing AI developers' privacy policies for how user data is used
Examining the collection of user chat data for model training
Addressing the lack of transparency in data retention practices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed a qualitative coding schema grounded in privacy law (primarily the CCPA); a minimal illustrative sketch follows this list
Comparatively analyzed data collection practices across six frontier developers
Assessed how user chat data is used for model training
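The paper's schema consists of legal-analytic codes applied to policy text by human coders. Purely as an illustration of how such a schema can be represented and tallied, here is a minimal Python sketch; every code name, developer label, and coded value below is a hypothetical placeholder, not the paper's actual instrument or findings.

```python
# Hypothetical coding schema: each code is a yes/no question applied to a
# developer's privacy policy. The paper's real schema is richer and grounded
# in CCPA categories; these names are illustrative placeholders.
CODES = [
    "trains_on_chats_by_default",
    "offers_training_opt_out",
    "processes_childrens_data",
    "limits_retention_period",
]

# Hypothetical coded results for anonymized developers (not the paper's data).
coded = {
    "Developer A": {
        "trains_on_chats_by_default": True,
        "offers_training_opt_out": False,
        "processes_childrens_data": True,
        "limits_retention_period": False,
    },
    "Developer B": {
        "trains_on_chats_by_default": True,
        "offers_training_opt_out": False,
        "processes_childrens_data": False,
        "limits_retention_period": True,
    },
}

def tally(coded_policies: dict) -> dict:
    """Count how many developers were coded 'yes' for each code."""
    return {
        code: sum(1 for answers in coded_policies.values() if answers[code])
        for code in CODES
    }

if __name__ == "__main__":
    for code, count in tally(coded).items():
        print(f"{code}: {count}/{len(coded)} developers coded 'yes'")
```

A code-by-developer matrix of this kind makes comparative claims such as "four of six process children's data" straightforward to derive and audit.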
Jennifer King
Stanford University
Kevin Klyman
Stanford, Harvard
Foundation Models · AI Regulation · Geopolitics
Emily Capstick
Stanford University
Tiffany Saade
Stanford University
Victoria Hsieh
Stanford University