🤖 AI Summary
This study investigates how the attention mechanism in Transformer models emulates human memory processes, particularly cue-based retrieval and contextual construction. Inspired by the psychological principle of encoding specificity, it introduces the novel hypothesis that "keywords serve as retrieval cues." By integrating methods from explainable artificial intelligence (XAI), computational psycholinguistics, attention analysis, and neuron activation tracing, the research identifies specific neurons responsible for encoding and retrieving such keywords. The approach successfully extracts keywords that exhibit strong alignment with contextual definitions, offering a new pathway to enhance the transparency and accountability of large language models and enabling applications such as machine unlearning.
📝 Abstract
While explainable artificial intelligence (XAI) for large language models (LLMs) remains an evolving field with many unresolved questions, increasing regulatory pressures have spurred interest in its role in ensuring transparency, accountability, and privacy-preserving machine unlearning. Although recent advances in XAI have provided some insights, the specific role of attention layers in Transformer-based LLMs remains underexplored. This study investigates the memory mechanisms instantiated by attention layers, drawing on prior research in psychology and computational psycholinguistics that links Transformer attention to cue-based retrieval in human memory. In this view, queries encode the retrieval context, keys index candidate memory traces, attention weights quantify cue-trace similarity, and values carry the encoded content, jointly enabling the construction of a context representation that precedes and facilitates memory retrieval. Guided by the Encoding Specificity Principle, we hypothesize that the cues used in the initial stage of retrieval are instantiated as keywords. We provide converging evidence for this keywords-as-cues hypothesis. In addition, we isolate neurons within attention layers whose activations selectively encode and facilitate the retrieval of context-defining keywords. Consequently, these keywords can be extracted from the identified neurons and can further contribute to downstream applications such as machine unlearning.
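The query/key/value correspondence described above maps directly onto standard scaled dot-product attention. The following is a minimal NumPy sketch of that standard formulation (not the paper's own code), with comments marking how each component lines up with the cue-based-retrieval analogy:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, annotated with the memory analogy:
    Q ~ retrieval context (cues), K ~ indices of candidate memory traces,
    attention weights ~ cue-trace similarity, V ~ encoded trace content."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # cue-trace similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over traces
    return weights @ V, weights                      # blended retrieved content

rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))   # 2 query positions, dimension 4
K = rng.standard_normal((5, 4))   # 5 candidate "memory traces"
V = rng.standard_normal((5, 4))   # content carried by each trace
out, w = attention(Q, K, V)
print(out.shape)                  # (2, 4): one retrieved vector per query
```

Under the keywords-as-cues hypothesis, rows of `w` that concentrate mass on a few key positions would correspond to queries retrieving via keyword-like cues.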