Generalizable Self-Evolving Memory for Automatic Prompt Optimization

📅 2026-03-22
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the limited generalization of existing automatic prompt optimization methods, which struggle to accumulate and reuse prompting knowledge across tasks. To overcome this, the authors propose MemAPO, a framework that models prompt optimization as a self-evolving process of experiential learning. MemAPO introduces a dual-memory mechanism that separately stores successful reasoning strategies and recurring error patterns, enabling cross-task knowledge reuse through memory retrieval, self-reflection, and iterative updating. Experimental results show that MemAPO significantly outperforms state-of-the-art methods across multiple benchmarks while substantially reducing optimization cost.
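
No code accompanies this listing, so the Python sketch below is only one possible reading of the dual-memory mechanism the summary describes. All names (`MemoryEntry`, `DualMemory`, the `embed` function) and the cosine-similarity retrieval are assumptions for illustration, not the paper's implementation.

```python
import math
from dataclasses import dataclass, field

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

@dataclass
class MemoryEntry:
    """One distilled experience: a reusable strategy or a recurring error pattern."""
    description: str         # natural-language strategy template or failure mode
    source_tasks: list[str]  # tasks the entry was distilled from
    score: float = 0.0       # running usefulness estimate, updated on reuse

@dataclass
class DualMemory:
    """Two separate stores, mirroring the dual-memory mechanism described above."""
    strategies: list[MemoryEntry] = field(default_factory=list)
    error_patterns: list[MemoryEntry] = field(default_factory=list)

    def retrieve(self, query: str, embed, k: int = 3):
        """Return the k entries from each store most similar to the query.

        `embed` is an assumed text-to-vector function (any sentence embedder
        would do); the paper's actual retrieval method is not given here.
        """
        q = embed(query)
        def top_k(entries):
            ranked = sorted(entries,
                            key=lambda e: cosine(q, embed(e.description)),
                            reverse=True)
            return ranked[:k]
        return top_k(self.strategies), top_k(self.error_patterns)
```

Under this reading, retrieval returns both what to imitate and what to avoid, which is what allows a single memory to serve heterogeneous tasks rather than one fixed task.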

📝 Abstract
Automatic prompt optimization is a promising approach for adapting large language models (LLMs) to downstream tasks, yet existing methods typically search for a specific prompt specialized to a fixed task. This paradigm limits generalization across heterogeneous queries and prevents models from accumulating reusable prompting knowledge over time. In this paper, we propose MemAPO, a memory-driven framework that reconceptualizes prompt optimization as generalizable and self-evolving experience accumulation. MemAPO maintains a dual-memory mechanism that distills successful reasoning trajectories into reusable strategy templates while organizing incorrect generations into structured error patterns that capture recurrent failure modes. Given a new query, the framework retrieves both relevant strategies and failure patterns to compose a prompt that promotes effective reasoning while discouraging known mistakes. Through iterative self-reflection and memory editing, MemAPO continuously updates its memory, enabling prompt optimization to improve over time rather than restarting from scratch for each task. Experiments on diverse benchmarks show that MemAPO consistently outperforms representative prompt optimization baselines while substantially reducing optimization cost.
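
Building on the `DualMemory` sketch above, the loop below illustrates how retrieval, prompt composition, self-reflection, and memory editing could fit together. `llm`, `evaluate`, and `reflect` are hypothetical stand-ins for the model call, the task scorer, and the reflection step; the paper's actual prompts and procedure are not specified on this page.

```python
def compose(query: str, strategies, errors) -> str:
    """Assemble a prompt that promotes retrieved strategies and discourages
    known error patterns; the wording here is purely illustrative."""
    use = "\n".join(f"- {s.description}" for s in strategies)
    avoid = "\n".join(f"- {e.description}" for e in errors)
    return (f"Helpful strategies:\n{use}\n\n"
            f"Known pitfalls to avoid:\n{avoid}\n\n"
            f"Task: {query}")

def optimize(query, memory, llm, evaluate, reflect, embed, max_rounds=3):
    """One illustrative pass of retrieve -> compose -> evaluate -> reflect -> edit.

    `llm(prompt)` returns a generation, `evaluate(query, answer)` returns
    (success_flag, feedback_text), and `reflect(text)` distills text into a
    short natural-language lesson. All three signatures are assumptions.
    """
    answer = None
    for _ in range(max_rounds):
        strategies, errors = memory.retrieve(query, embed)
        answer = llm(compose(query, strategies, errors))
        ok, feedback = evaluate(query, answer)
        if ok:
            # Distill the successful trajectory into a reusable strategy template.
            memory.strategies.append(MemoryEntry(reflect(answer), [query]))
            return answer
        # Record the failure mode so later prompts can steer away from it.
        memory.error_patterns.append(MemoryEntry(reflect(feedback), [query]))
    return answer
```

Because the memory is edited on both success and failure, optimization for one task leaves behind experience that later tasks can retrieve, which is the claimed source of the reduced optimization cost.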
Problem

Research questions and friction points this paper is trying to address.

prompt optimization
generalization
memory
large language models
knowledge accumulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

memory-driven prompt optimization
self-evolving memory
strategy templates
error patterns
generalizable prompting
Authors

Guanbao Liang (Zhejiang University)
Yuanchen Bei (University of Illinois Urbana-Champaign)
Sheng Zhou (Zhejiang University): Data Mining
Yuheng Qin (Alibaba Group)
Huan Zhou (Northwestern Polytechnical University): Mobile Edge Computing, Federated Learning, Mobile Social Networks, VANETs, Data Offloading
Bingxin Jia (Alibaba Group)
Bin Li (Microsoft Research): video coding, video transmission, HEVC
Jiajun Bu (Zhejiang University)