Stateless Yet Not Forgetful: Implicit Memory as a Hidden Channel in LLMs

📅 2026-02-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Although large language models (LLMs) are conventionally assumed to be stateless, they can implicitly encode information in their outputs and later recover it across interactions, thereby establishing a cross-turn “implicit memory” channel that challenges this assumption. This work introduces the concept of implicit memory for the first time, leveraging prompt engineering and fine-tuning to induce and systematically analyze its formation and activation mechanisms. Building on these insights, we design a novel temporal backdoor—dubbed the “time bomb”—which demonstrates that persistent cross-interaction information storage is achievable without any explicit memory module. Our findings reveal significant implications for the security and reliability of LLMs, and we release our code and data to foster further research in this emerging area.

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) are commonly treated as stateless: once an interaction ends, no information is assumed to persist unless it is explicitly stored and re-supplied. We challenge this assumption by introducing implicit memory-the ability of a model to carry state across otherwise independent interactions by encoding information in its own outputs and later recovering it when those outputs are reintroduced as input. This mechanism does not require any explicit memory module, yet it creates a persistent information channel across inference requests. As a concrete demonstration, we introduce a new class of temporal backdoors, which we call time bombs. Unlike conventional backdoors that activate on a single trigger input, time bombs activate only after a sequence of interactions satisfies hidden conditions accumulated via implicit memory. We show that such behavior can be induced today through straightforward prompting or fine-tuning. Beyond this case study, we analyze broader implications of implicit memory, including covert inter-agent communication, benchmark contamination, targeted manipulation, and training-data poisoning. Finally, we discuss detection challenges and outline directions for stress-testing and evaluation, with the goal of anticipating and controlling future developments. To promote future research, we release code and data at: https://github.com/microsoft/implicitMemory.
Problem

Research questions and friction points this paper is trying to address.

implicit memory
stateless assumption
temporal backdoors
covert communication
benchmark contamination
Innovation

Methods, ideas, or system contributions that make the work stand out.

implicit memory
stateless LLMs
temporal backdoors
time bombs
covert communication
🔎 Similar Papers
No similar papers found.