Learning Hierarchical Procedural Memory for LLM Agents through Bayesian Selection and Contrastive Refinement

📅 2025-12-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language model (LLM) agents face challenges in achieving sample-efficient, interpretable, and continual evolution without parameter updates. Method: This paper proposes a purely external, hierarchical program memory system that extracts reusable program modules from execution trajectories, integrates Bayesian reliability modeling with expected-utility-driven action selection, and introduces a success/failure contrastive refinement mechanism for zero-parameter-update autonomous learning. Contribution/Results: We pioneer the "frozen-LLM + external program memory" paradigm, unifying program abstraction compression and Bayesian decision-making. Evaluated on four benchmarks, our approach achieves an average performance of 78.1%. On ALFWorld, zero-shot generalization to unseen tasks improves to 90.3% (+3.1%), memory construction accelerates by 2800×, and 2,851 trajectories are compressed into 187 high-efficiency programs.


πŸ“ Abstract
We present MACLA, a framework that decouples reasoning from learning by maintaining a frozen large language model while performing all adaptation in an external hierarchical procedural memory. MACLA extracts reusable procedures from trajectories, tracks their reliability via Bayesian posteriors, selects actions through expected-utility scoring, and refines procedures by contrasting successes and failures. Across four benchmarks (ALFWorld, WebShop, TravelPlanner, InterCodeSQL), MACLA achieves 78.1 percent average performance, outperforming all baselines. On ALFWorld unseen tasks, MACLA reaches 90.3 percent (+3.1 percent positive generalization). The system constructs memory in 56 seconds, 2800 times faster than the state-of-the-art LLM parameter-training baseline, compressing 2,851 trajectories into 187 procedures. Experimental results demonstrate that structured external memory with Bayesian selection and contrastive refinement enables sample-efficient, interpretable, and continually improving agents without LLM parameter updates.
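The abstract's "Bayesian posteriors plus expected-utility scoring" can be sketched with a Beta posterior over each procedure's success rate. This is a minimal illustration, not the paper's implementation: the class name `Procedure`, the `reward` field, and the Beta(1, 1) prior are all assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Procedure:
    """A stored procedure with a Beta posterior over its success rate.

    Hypothetical sketch: starts from a uniform Beta(1, 1) prior; each
    observed success or failure updates the posterior counts.
    """
    name: str
    reward: float       # assumed: estimated payoff if the procedure succeeds
    successes: int = 0
    failures: int = 0

    @property
    def reliability(self) -> float:
        # Posterior mean of Beta(1 + successes, 1 + failures)
        return (1 + self.successes) / (2 + self.successes + self.failures)

    def expected_utility(self) -> float:
        # Expected utility = posterior success probability x reward
        return self.reliability * self.reward

    def update(self, succeeded: bool) -> None:
        # Bayesian update: increment the matching posterior count
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1

def select(procedures: list[Procedure]) -> Procedure:
    """Pick the procedure with the highest expected utility."""
    return max(procedures, key=lambda p: p.expected_utility())

# Two candidates with equal reward but different track records.
a = Procedure("open-fridge-then-grab", reward=1.0, successes=8, failures=2)
b = Procedure("grab-directly", reward=1.0, successes=1, failures=3)
best = select([a, b])   # a: reliability 9/12 = 0.75 beats b: 2/6
```

Under this scheme, untried procedures keep a nonzero posterior mean (0.5 under the uniform prior), so new procedures still get explored rather than being starved by well-tested ones.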
Problem

Research questions and friction points this paper is trying to address.

How to achieve sample-efficient agent adaptation without LLM parameter updates
How to extract reusable, reliable procedures from execution trajectories
How to refine stored procedures using both successful and failed trajectories
Innovation

Methods, ideas, or system contributions that make the work stand out.

External hierarchical procedural memory for LLM agents
Bayesian posterior tracking for reliability and selection
Contrastive refinement of procedures from successes and failures
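The contrastive-refinement idea in the last bullet can be sketched as locating where a successful and a failed trajectory diverge, and treating that divergence as the candidate refinement point. This is a toy sketch under assumed data shapes (trajectories as lists of action strings); the paper's actual mechanism is not specified here.

```python
def contrast(success_traj: list[str], failure_traj: list[str]):
    """Return the first step where a success and a failure diverge.

    Hypothetical helper: the divergent step suggests which action to
    keep (from the success) and which to drop (from the failure).
    """
    for i, (s, f) in enumerate(zip(success_traj, failure_traj)):
        if s != f:
            return {"step": i, "keep": s, "drop": f}
    # Identical shared prefix: the difference lies in trajectory length.
    return None

# Toy ALFWorld-style trajectories (illustrative action names).
success = ["goto fridge", "open fridge", "take apple", "goto table"]
failure = ["goto fridge", "take apple", "goto table"]
hint = contrast(success, failure)
# The success opened the fridge before grabbing; the failure skipped it.
```

A refinement pass could then insert the kept action ("open fridge") as a required step in the stored procedure, which is one simple way success/failure contrast can turn into a concrete edit.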