🤖 AI Summary
This work systematically investigates, for the first time, whether large language models (LLMs) memorize publicly available recommendation datasets (e.g., MovieLens-1M) during pretraining, and how such memorization affects recommendation performance and bias. We propose a prompt-engineering-based memorization detection framework and conduct structured retrieval experiments across multiple GPT and Llama model sizes to quantify the recoverability of user profiles, item attributes, and interaction histories. Results show that all tested models exhibit non-negligible memorization; memorization strength correlates positively with zero-shot recommendation accuracy but simultaneously exacerbates popularity bias. Moreover, memorization increases with model scale and exhibits architecture-dependent patterns. This study establishes a novel, trustworthiness-oriented evaluation paradigm for LLMs in recommendation settings. The code is publicly released.
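To make the probing idea concrete, below is a minimal sketch of what such a structured retrieval probe can look like. The prompt template, model name, and decoding settings are illustrative assumptions, not the paper's exact setup (the authors' prompts live in the linked LLM-MemoryInspector repository); it assumes the official OpenAI Python SDK with an `OPENAI_API_KEY` in the environment.

```python
# Illustrative memorization probe: show the model verbatim records from
# MovieLens-1M's movies.dat (format: `MovieID::Title::Genres`) and ask it to
# complete a held-out line. A verbatim completion is evidence of memorization.
# NOTE: prompt wording and model choice are assumptions for this sketch.
from openai import OpenAI

client = OpenAI()

PROBE = (
    "The following lines are taken from the MovieLens-1M movies.dat file.\n"
    "Complete the next line exactly.\n"
    "1::Toy Story (1995)::Animation|Children's|Comedy\n"
    "2::Jumanji (1995)::Adventure|Children's|Fantasy\n"
    "3::"
)

response = client.chat.completions.create(
    model="gpt-4o",   # any chat model under test
    temperature=0,    # deterministic decoding for reproducible retrieval
    messages=[{"role": "user", "content": PROBE}],
)
completion = (response.choices[0].message.content or "").strip()

# Ground truth for item 3 in movies.dat:
EXPECTED = "Grumpier Old Men (1995)::Comedy|Romance"
print(completion)
print("memorized:", completion.startswith("Grumpier Old Men (1995)"))
```

The same probe shape extends to user profiles (`users.dat`) and interactions (`ratings.dat`) by swapping in the corresponding record format.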
📄 Abstract
Large Language Models (LLMs) have become increasingly central to recommendation scenarios due to their remarkable natural language understanding and generation capabilities. Although significant research has explored the use of LLMs for various recommendation tasks, little effort has been dedicated to verifying whether they have memorized public recommendation datasets as part of their training data. This is undesirable because memorization reduces the generalizability of research findings, as benchmarking on memorized datasets does not guarantee generalization to unseen datasets. Furthermore, memorization can amplify biases; for example, some popular items may be recommended more frequently than others. In this work, we investigate whether LLMs have memorized public recommendation datasets. Specifically, we examine two model families (GPT and Llama) across multiple sizes, focusing on one of the most widely used datasets in recommender systems: MovieLens-1M. First, we define dataset memorization as the extent to which item attributes, user profiles, and user-item interactions can be retrieved by prompting the LLMs. Second, we analyze the impact of memorization on recommendation performance. Lastly, we examine whether memorization varies across model families and model sizes. Our results reveal that all models exhibit some degree of memorization of MovieLens-1M, and that recommendation performance is related to the extent of memorization. We have made all the code publicly available at: https://github.com/sisinflab/LLM-MemoryInspector
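The abstract's definition of memorization, the extent to which records can be retrieved by prompting, suggests a simple coverage-style measure. The sketch below is one plausible instantiation (exact-match share over the item catalog), offered only as an assumption; the paper's actual metric may differ.

```python
# One plausible reading of "extent to which ... can be retrieved": the share
# of catalog records the model reproduces verbatim. The function name and the
# exact-match criterion are illustrative assumptions, not the paper's metric.
def memorization_rate(retrieved: dict[int, str], ground_truth: dict[int, str]) -> float:
    """Both dicts map MovieLens item IDs to `Title::Genres` strings."""
    hits = sum(
        1
        for item_id, record in ground_truth.items()
        if retrieved.get(item_id, "").strip() == record
    )
    return hits / len(ground_truth)

# Toy usage with real movies.dat records:
truth = {
    1: "Toy Story (1995)::Animation|Children's|Comedy",
    2: "Jumanji (1995)::Adventure|Children's|Fantasy",
}
probed = {
    1: "Toy Story (1995)::Animation|Children's|Comedy",  # exact recall
    2: "Jumanji (1996)::Adventure",                      # partial / incorrect
}
print(memorization_rate(probed, truth))  # -> 0.5
```

Relaxing the exact-match criterion (e.g., title-only matching or edit-distance thresholds) would yield softer variants of the same idea.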