The AI Memory Gap: Users Misremember What They Created With AI or Without

📅 2025-09-15
🤖 AI Summary
This study identifies a cognitive risk in human-AI collaborative text authoring: users exhibit significantly impaired source memory — specifically, reduced accuracy in distinguishing self-generated from AI-generated content — after LLM-assisted writing, with the most severe confusion in hybrid workflows that alternate AI and non-AI content.

Method: a preregistered psychological experiment combining behavioral measurement of source-attribution accuracy with computational modeling grounded in dual-process memory theory.

Contribution/Results: the first quantitative evidence of a systematic AI-induced source-memory bias: LLM use reduces source-identification accuracy by 23% on average, with the highest misattribution rates in hybrid conditions. The computational model replicates this bias, indicating that it arises from weakened memory encoding rather than decision-level noise. These findings provide an empirical basis for designing interactive AI systems with built-in source-provenance mechanisms to mitigate such memory failures.
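The "weakened encoding, not decision noise" claim above can be illustrated with a toy signal-detection sketch. This is not the authors' computational model: the `encoding_strength` values, the Gaussian trace noise, and the ±1 source coding are all invented here purely to show how a weaker memory trace alone lowers attribution accuracy.

```python
import random

def simulate_attribution(encoding_strength, n_items=10_000, noise=1.0, seed=0):
    """Toy sketch: each item leaves a memory trace whose sign encodes its
    true source (+1 = self, -1 = AI), scaled by encoding strength plus
    Gaussian noise; attribution reads out the sign of the noisy trace."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_items):
        true_source = rng.choice([+1, -1])
        trace = encoding_strength * true_source + rng.gauss(0, noise)
        guessed = +1 if trace >= 0 else -1
        correct += guessed == true_source
    return correct / n_items

# Weaker encoding (hypothetically, after AI-assisted writing) lowers
# accuracy even though the decision rule is identical in both cases.
acc_strong = simulate_attribution(encoding_strength=1.5)
acc_weak = simulate_attribution(encoding_strength=0.5)
```

Because only the encoding parameter changes between the two runs, any accuracy gap is attributable to the trace, not the readout — the same dissociation the paper's model tests.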

📝 Abstract
As large language models (LLMs) become embedded in interactive text generation, disclosure of AI as a source depends on people remembering which ideas or texts came from themselves and which were created with AI. We investigate how accurately people remember the source of content when using AI. In a pre-registered experiment, 184 participants generated and elaborated on ideas both unaided and with an LLM-based chatbot. One week later, they were asked to identify the source (noAI vs withAI) of these ideas and texts. Our findings reveal a significant gap in memory: After AI use, the odds of correct attribution dropped, with the steepest decline in mixed human-AI workflows, where either the idea or elaboration was created with AI. We validated our results using a computational model of source memory. Discussing broader implications, we highlight the importance of considering source confusion in the design and use of interactive text generation technologies.
Problem

Research questions and friction points this paper addresses.

Investigating memory accuracy for AI-generated content attribution
Measuring source confusion between human and AI-created text
Examining memory decline in mixed human-AI workflow scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-registered experiment with human-AI collaboration
Computational model analyzing source memory accuracy
Mixed workflow analysis revealing attribution decline patterns