Hubble: a Model Suite to Advance the Study of LLM Memorization

📅 2025-10-22
🤖 AI Summary
This study addresses the unintended memorization of sensitive training data by large language models (LLMs). To systematically investigate how data frequency, corpus size, and exposure timing shape memorization, the authors pretrain models with controlled insertion of text (e.g., book passages, biographies, and test sets) that emulates key memorization risks. The analysis shows that memorization is governed by the frequency of sensitive data relative to the size of the training corpus, and that data inserted only in early pretraining can be forgotten without continued exposure; this suggests two mitigations, diluting sensitive data in a larger corpus and ordering it to appear earlier in training. The open-source Hubble release comprises eight core models (standard and perturbed variants at 1B and 8B parameters, pretrained on 100B or 500B tokens) plus six perturbed models with text inserted at different pretraining phases, designed to support privacy research such as membership inference and machine unlearning. Analysis of the inserted biographies further quantifies how readily different categories of private information are memorized. The Hubble suite establishes a reproducible benchmark and methodological foundation for the controlled study of LLM memorization and privacy-enhancing training.

📝 Abstract
We present Hubble, a suite of fully open-source large language models (LLMs) for the scientific study of LLM memorization. Hubble models come in standard and perturbed variants: standard models are pretrained on a large English corpus, and perturbed models are trained in the same way but with controlled insertion of text (e.g., book passages, biographies, and test sets) designed to emulate key memorization risks. Our core release includes 8 models -- standard and perturbed models with 1B or 8B parameters, pretrained on 100B or 500B tokens -- establishing that memorization risks are determined by the frequency of sensitive data relative to the size of the training corpus (i.e., a password appearing once in a smaller corpus is memorized better than the same password in a larger corpus). Our release also includes 6 perturbed models with text inserted at different pretraining phases, showing that sensitive data without continued exposure can be forgotten. These findings suggest two best practices for addressing memorization risks: to dilute sensitive data by increasing the size of the training corpus, and to order sensitive data to appear earlier in training. Beyond these general empirical findings, Hubble enables a broad range of memorization research; for example, analyzing the biographies reveals how readily different types of private information are memorized. We also demonstrate that the randomized insertions in Hubble make it an ideal testbed for membership inference and machine unlearning, and invite the community to further explore, benchmark, and build upon our work.
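The abstract notes that Hubble's randomized insertions make it a testbed for membership inference. As a minimal sketch of what such an evaluation looks like, the snippet below implements the classic loss-thresholding attack: examples the model saw in training ("members") tend to have lower loss, so negative loss serves as a membership score, and the attack is summarized by ROC AUC. The losses here are synthetic stand-ins, not outputs of any Hubble model, and the function names are illustrative, not part of the Hubble release.

```python
import numpy as np

def loss_mia_scores(losses):
    """Loss-based membership score: lower loss -> higher membership score."""
    return -np.asarray(losses, dtype=float)

def attack_auc(member_scores, nonmember_scores):
    """ROC AUC: probability a random member outscores a random non-member."""
    m = np.asarray(member_scores, dtype=float)[:, None]
    n = np.asarray(nonmember_scores, dtype=float)[None, :]
    return float((m > n).mean() + 0.5 * (m == n).mean())

rng = np.random.default_rng(0)
# Synthetic per-example losses: inserted (member) texts are slightly
# better predicted than held-out (non-member) texts.
member_losses = rng.normal(loc=2.6, scale=0.5, size=300)
nonmember_losses = rng.normal(loc=3.0, scale=0.5, size=300)
auc = attack_auc(loss_mia_scores(member_losses), loss_mia_scores(nonmember_losses))
```

An AUC near 0.5 means the attack cannot distinguish inserted from held-out text; values approaching 1.0 indicate strong memorization. In a suite like Hubble, the same inserted texts can be scored across the 1B/8B and 100B/500B-token variants to compare memorization strength.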
Problem

Research questions and friction points this paper is trying to address.

Hubble studies LLM memorization risks through controlled text insertion
It examines how training corpus size affects sensitive data retention
The suite enables research on membership inference and machine unlearning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source LLM suite for memorization study
Standard and perturbed models with controlled text insertion
Analyzes memorization risks via frequency and training phase
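The phase-varied perturbed models above insert text at different points in pretraining. The sketch below illustrates the idea with a hypothetical data-ordering helper (not the Hubble pipeline): sensitive documents are placed only within an early fraction of the training stream, so they receive no continued exposure later in training.

```python
import random

def schedule_insertions(corpus, sensitive, phase_frac=0.25, seed=0):
    """Return a training stream in which `sensitive` docs appear only
    within the first `phase_frac` fraction of `corpus`, emulating early
    insertion without continued exposure. Illustrative helper only."""
    rng = random.Random(seed)
    cutoff = max(len(sensitive), int(len(corpus) * phase_frac))
    head = corpus[:cutoff]
    # Choose random positions for the sensitive docs within the head.
    slots = sorted(rng.sample(range(len(head) + len(sensitive)), len(sensitive)))
    stream, si, hi = [], 0, 0
    for pos in range(len(head) + len(sensitive)):
        if si < len(sensitive) and pos == slots[si]:
            stream.append(sensitive[si]); si += 1
        else:
            stream.append(head[hi]); hi += 1
    return stream + corpus[cutoff:]

corpus = [f"doc{i}" for i in range(100)]
stream = schedule_insertions(corpus, ["secret_a", "secret_b"], phase_frac=0.25)
```

Varying `phase_frac` (or the insertion point) across otherwise-identical training runs is what lets the suite measure how much early-inserted data is forgotten by the end of pretraining.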