Memory Transfer Planning: LLM-driven Context-Aware Code Adaptation for Robot Manipulation

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing robotic manipulation methods generalize poorly, struggling to adapt to novel environments without costly retraining or reliance on static prompts and single-shot code generation. Method: We propose the Memory Transfer Planning (MTP) framework, which enables context-aware code adaptation and re-planning by retrieving procedurally annotated code examples that executed successfully across diverse environments. MTP performs zero-shot knowledge transfer from simulation to real-world deployment without fine-tuning large language model parameters, relying solely on a retrieval-and-adaptation mechanism. Contribution/Results: Evaluated on RLBench, CALVIN, and real robot platforms, MTP significantly outperforms fixed-prompt and memory-free baselines in task success rate and environmental adaptability. Crucially, the memory bank constructed in simulation transfers directly to hardware deployment, enabling robust cross-domain generalization without additional training or parameter updates.

📝 Abstract
Large language models (LLMs) are increasingly explored in robot manipulation, but many existing methods struggle to adapt to new environments. Many systems require either environment-specific policy training or depend on fixed prompts and single-shot code generation, leading to limited transferability and manual re-tuning. We introduce Memory Transfer Planning (MTP), a framework that leverages successful control-code examples from different environments as procedural knowledge, using them as in-context guidance for LLM-driven planning. Specifically, MTP (i) generates an initial plan and code using LLMs, (ii) retrieves relevant successful examples from a code memory, and (iii) contextually adapts the retrieved code to the target setting for re-planning without updating model parameters. We evaluate MTP on RLBench, CALVIN, and a physical robot, demonstrating effectiveness beyond simulation. Across these settings, MTP consistently improved success rate and adaptability compared with fixed-prompt code generation, naive retrieval, and memory-free re-planning. Furthermore, in hardware experiments, leveraging a memory constructed in simulation proved effective. MTP provides a practical approach that exploits procedural knowledge to realize robust LLM-based planning across diverse robotic manipulation scenarios, enhancing adaptability to novel environments and bridging simulation and real-world deployment.
Problem

Research questions and friction points this paper is trying to address.

Adapting robot manipulation code to new environments without retraining
Overcoming limited transferability of fixed-prompt code generation methods
Bridging simulation and real-world deployment through procedural knowledge transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages successful code examples as procedural knowledge
Retrieves relevant examples from memory for contextual adaptation
Re-plans by contextually adapting retrieved code, without updating model parameters
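The retrieve-then-adapt loop summarized above can be sketched in code. This is a minimal illustration, not the paper's implementation: the `CodeMemory`, `MemoryEntry`, and `adapt_and_replan` names are invented here, the word-overlap retriever stands in for whatever similarity measure MTP actually uses, and `llm` is any caller-supplied prompt-to-code function.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    task_description: str  # natural-language task the code solved
    code: str              # control code that executed successfully


@dataclass
class CodeMemory:
    """Bank of successful control-code examples gathered across environments."""
    entries: list = field(default_factory=list)

    def add(self, entry: MemoryEntry) -> None:
        self.entries.append(entry)

    def retrieve(self, query: str, k: int = 1) -> list:
        # Rank stored examples by word-overlap (Jaccard) similarity to the
        # query task; a stand-in for a real embedding-based retriever.
        q = set(query.lower().split())

        def score(e: MemoryEntry) -> float:
            w = set(e.task_description.lower().split())
            return len(q & w) / max(len(q | w), 1)

        return sorted(self.entries, key=score, reverse=True)[:k]


def adapt_and_replan(llm, task: str, failed_code: str, memory: CodeMemory) -> str:
    """One MTP-style re-planning step: retrieve a successful example from
    memory and ask the LLM to adapt it to the current environment.
    No model parameters are updated; `llm` is any prompt -> code callable."""
    examples = memory.retrieve(task, k=1)
    prompt = (
        f"Task: {task}\n"
        f"Previous attempt (failed):\n{failed_code}\n"
    )
    if examples:
        prompt += f"Successful example from another environment:\n{examples[0].code}\n"
    prompt += "Adapt the example to the current environment and return new control code."
    return llm(prompt)
```

Because the memory holds only task descriptions and code strings, a bank populated in simulation can be queried unchanged at hardware deployment time, which is the cross-domain transfer the paper reports.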
Authors
Tomoyuki Kagaya
Panasonic Connect Co., Ltd., Japan
Subramanian Lakshmi
Panasonic R&D Center, Singapore
Yuxuan Lou
National University of Singapore, Singapore
Thong Jing Yuan
Panasonic R&D Center, Singapore
Jayashree Karlekar
Panasonic R&D Center, Singapore
Sugiri Pranata
Panasonic R&D Center, Singapore
Natsuki Murakami
Panasonic Connect Co., Ltd., Japan
Akira Kinose
Panasonic Connect Co., Ltd., Japan
Yang You
Postdoc, Stanford University
3D vision, computer graphics, computational geometry