Procedural Memory Is Not All You Need: Bridging Cognitive Gaps in LLM-Based Agents

📅 2025-05-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a critical limitation of large language models (LLMs): their overreliance on procedural memory severely impairs adaptability in "wicked" environments, which are characterized by dynamic rule changes, ambiguous feedback, and novel contexts. To address this, the authors systematically analyze the cognitive roots of this inflexibility and propose a decoupled modular architecture that explicitly integrates semantic memory retrieval with online associative learning driven by reinforcement signals, enabling cross-modal knowledge binding. Evaluation on a dedicated "wicked"-environment benchmark shows that the approach improves task generalization by 42%, improves decision stability under ambiguous feedback by 3.1×, and substantially boosts zero-shot transfer and continual autonomous learning.
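The decoupling described above can be illustrated with a minimal sketch. All class and method names below (`SemanticMemory`, `AssociativeLearner`, `ModularAgent`, etc.) are illustrative assumptions, not the authors' code: a frozen procedural policy is bypassed by a declarative fact store plus an online learner that re-weights actions from reinforcement signals, so the agent can adapt when the rules of the environment shift.

```python
# Hypothetical sketch of the decoupled modular architecture; names are
# illustrative assumptions, not the paper's actual implementation.
from dataclasses import dataclass, field


@dataclass
class SemanticMemory:
    """Declarative facts stored outside the model's weights, retrievable by key."""
    facts: dict = field(default_factory=dict)

    def store(self, key: str, value: str) -> None:
        self.facts[key] = value

    def retrieve(self, key: str, default: str = "unknown") -> str:
        return self.facts.get(key, default)


@dataclass
class AssociativeLearner:
    """Online learner that re-weights action preferences from reward signals."""
    weights: dict = field(default_factory=dict)
    lr: float = 0.5  # learning rate for the incremental update

    def update(self, action: str, reward: float) -> None:
        # Exponential-recency update: recent feedback outweighs stale habits.
        old = self.weights.get(action, 0.0)
        self.weights[action] = old + self.lr * (reward - old)

    def best_action(self, candidates: list) -> str:
        return max(candidates, key=lambda a: self.weights.get(a, 0.0))


class ModularAgent:
    """Routes decisions through semantic retrieval and associative weights
    rather than relying solely on a frozen procedural policy."""

    def __init__(self) -> None:
        self.memory = SemanticMemory()
        self.learner = AssociativeLearner()

    def act(self, candidates: list) -> str:
        return self.learner.best_action(candidates)

    def observe(self, action: str, reward: float, fact: tuple = None) -> None:
        self.learner.update(action, reward)
        if fact is not None:
            self.memory.store(*fact)


# Usage: the environment's rule flips mid-episode; the agent adapts.
agent = ModularAgent()
agent.observe("press_red", 1.0, fact=("current_rule", "red wins"))
print(agent.act(["press_red", "press_blue"]))  # press_red

agent.observe("press_red", -1.0)   # the rule has shifted
agent.observe("press_blue", 1.0)
print(agent.act(["press_red", "press_blue"]))  # press_blue
```

The key design point this sketch mirrors is that the reward-driven re-weighting lives in a separate module, so adaptation does not require retraining the procedural (LLM) component.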

📝 Abstract
Large Language Models (LLMs) represent a landmark achievement in Artificial Intelligence (AI), demonstrating unprecedented proficiency in procedural tasks such as text generation, code completion, and conversational coherence. These capabilities stem from their architecture, which mirrors human procedural memory -- the brain's ability to automate repetitive, pattern-driven tasks through practice. However, as LLMs are increasingly deployed in real-world applications, it becomes impossible to ignore their limitations when operating in complex, unpredictable environments. This paper argues that LLMs, while transformative, are fundamentally constrained by their reliance on procedural memory. To create agents capable of navigating ``wicked'' learning environments -- where rules shift, feedback is ambiguous, and novelty is the norm -- we must augment LLMs with semantic memory and associative learning systems. By adopting a modular architecture that decouples these cognitive functions, we can bridge the gap between narrow procedural expertise and the adaptive intelligence required for real-world problem-solving.
Problem

Research questions and friction points this paper is trying to address.

LLMs lack adaptability in unpredictable environments
Procedural memory limits LLMs' real-world problem-solving
Whether augmenting LLMs with semantic memory can restore adaptability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augment LLMs with semantic memory systems
Integrate associative learning for adaptability
Modular architecture decouples cognitive functions