🤖 AI Summary
This work addresses the limited robustness of vision–language–action (VLA) models in out-of-distribution (OOD) scenarios, which stems from their lack of long-term memory, causal failure attribution, and dynamic intervention capabilities. To overcome this without parameter fine-tuning, we propose SOMA, a context-adaptive augmentation framework that introduces, for the first time, a memory-augmented architecture endowed with causal reasoning and dynamic intervention mechanisms. The framework integrates contrastive dual-memory retrieval-augmented generation (RAG), an attribution-driven large language model (LLM) orchestrator, model context protocol (MCP)-based interventions, and an offline memory consolidation strategy. Evaluated on the LIBERO-PRO and LIBERO-SOMA benchmarks, our approach achieves an average absolute success-rate improvement of 56.6%, including an absolute gain of 89.1% on long-horizon task chains.
📝 Abstract
Despite the promise of Vision-Language-Action (VLA) models as generalist robotic controllers, their robustness against perceptual noise and environmental variations in out-of-distribution (OOD) tasks remains fundamentally limited by the absence of long-term memory, causal failure attribution, and dynamic intervention capability. To address this, we propose SOMA, a Strategic Orchestration and Memory-Augmented system that upgrades frozen VLA policies for robust in-context adaptation without parameter fine-tuning. Specifically, SOMA operates through an online pipeline of contrastive Dual-Memory Retrieval-Augmented Generation (RAG), an Attribution-Driven Large-Language-Model (LLM) Orchestrator, and extensible Model Context Protocol (MCP) interventions, while an offline Memory Consolidation module continuously distills execution traces into reliable priors. Experimental evaluations across three backbone models (pi0, pi0.5, and SmolVLA) on LIBERO-PRO and our proposed LIBERO-SOMA benchmark demonstrate that SOMA achieves an average absolute success-rate gain of 56.6%, including a significant absolute improvement of 89.1% in long-horizon task chaining. Project page and source code are available at: https://github.com/LZY-1021/SOMA.