🤖 AI Summary
This work addresses the excessive prompt length and high prefill latency of existing text-based memory approaches for embodied planning, as well as the inefficiency of current KV cache reuse strategies under frequent memory updates. To overcome these limitations, the authors propose a KV-cache-centric memory management system with three key innovations: a static-dynamic hybrid-granularity memory construction, a multi-hop memory recomputation mechanism, and a hierarchical balanced loading strategy. Together, these components mitigate redundant recomputation and load imbalance in the cache. Experiments on the ALFRED dataset show that the proposed method achieves a 2.68× speedup over text-based memory with negligible accuracy loss, and outperforms CacheBlend with a 4.13% higher task success rate while reducing time-to-first-token by 1.90×.
📝 Abstract
Memory-augmented Large Language Models (LLMs) have demonstrated remarkable capability for complex, long-horizon embodied planning. By keeping track of past experiences and environmental states, memory enables LLMs to maintain a global view and thereby avoid repetitive exploration. However, existing approaches often store the memory as raw text, leading to excessively long prompts and high prefill latency. While it is possible to store and reuse the KV caches, the efficiency benefits are greatly undermined by frequent KV cache updates. In this paper, we propose KEEP, a KV-cache-centric memory management system for efficient embodied planning. KEEP features three key innovations: (1) a Static-Dynamic Memory Construction algorithm that reduces KV cache recomputation through mixed-granularity memory groups; (2) a Multi-hop Memory Re-computation algorithm that dynamically identifies important cross-attention among memory groups and reconstructs memory interactions iteratively; (3) a Layer-balanced Memory Loading strategy that eliminates unbalanced KV cache loading and cross-attention computation across layers. Extensive experimental results demonstrate that KEEP achieves a 2.68× speedup with negligible accuracy loss compared with text-based memory methods on the ALFRED dataset. Compared with the KV re-computation method CacheBlend (EuroSys'25), KEEP shows a 4.13% success-rate improvement and a 1.90× time-to-first-token (TTFT) reduction. Our code is available at https://github.com/PKU-SEC-Lab/KEEP_Embodied_Memory.
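To make the static-dynamic split concrete, below is a minimal toy sketch (hypothetical, not the paper's implementation) of the core caching idea: memory is partitioned into groups, static groups reuse their stored KV cache across planning steps, and only dynamic (changed) groups pay the prefill cost again. The `KVCacheStore` class, group names, and the list-of-floats "KV cache" are all illustrative stand-ins.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class KVCacheStore:
    """Toy per-group KV cache: static groups reuse, dynamic groups recompute."""
    cache: Dict[str, Tuple[str, List[float]]] = field(default_factory=dict)
    prefills: int = 0  # counts expensive prefill passes

    def _encode(self, text: str) -> List[float]:
        # Stand-in for the expensive prefill that produces a group's KV cache.
        self.prefills += 1
        return [float(ord(c)) for c in text]

    def get_kv(self, group_id: str, text: str, static: bool) -> List[float]:
        hit = self.cache.get(group_id)
        if static and hit is not None and hit[0] == text:
            return hit[1]                 # static and unchanged: reuse KV cache
        kv = self._encode(text)           # dynamic or stale: recompute
        self.cache[group_id] = (text, kv)
        return kv

store = KVCacheStore()
# Planning step 1: both groups are prefilled once.
store.get_kv("scene", "kitchen layout", static=True)
store.get_kv("state", "holding: nothing", static=False)
# Planning step 2: the static scene group is reused; only the state recomputes.
store.get_kv("scene", "kitchen layout", static=True)
store.get_kv("state", "holding: mug", static=False)
print(store.prefills)  # → 3 (instead of 4 with no reuse)
```

In the full system, coarse static groups amortize prefill across many steps, while fine-grained dynamic groups limit how much cache must be rebuilt per update; the paper's multi-hop re-computation and layer-balanced loading then address cross-group attention and per-layer load, which this sketch does not model.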