SOMA: Strategic Orchestration and Memory-Augmented System for Vision-Language-Action Model Robustness via In-Context Adaptation

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited robustness of vision–language–action (VLA) models in out-of-distribution (OOD) scenarios, stemming from their lack of long-term memory, causal attribution, and dynamic intervention capabilities. To overcome this without requiring parameter fine-tuning, we propose a context-adaptive augmentation framework that introduces, for the first time, a memory-augmented architecture endowed with causal reasoning and dynamic intervention mechanisms. The framework integrates contrastive dual-memory retrieval-augmented generation (RAG), an attribution-driven large language model coordinator, model context protocol (MCP)-based interventions, and an offline memory consolidation strategy. Evaluated on the LIBERO-PRO and LIBERO-SOMA benchmarks, our approach achieves an average absolute success rate improvement of 56.6% and boosts long-horizon task-chain success by 89.1%.

📝 Abstract
Despite the promise of Vision-Language-Action (VLA) models as generalist robotic controllers, their robustness to perceptual noise and environmental variation in out-of-distribution (OOD) tasks remains fundamentally limited by the absence of long-term memory, causal failure attribution, and dynamic intervention capability. To address this, we propose SOMA, a Strategic Orchestration and Memory-Augmented System that upgrades frozen VLA policies for robust in-context adaptation without parameter fine-tuning. Specifically, SOMA operates through an online pipeline of contrastive Dual-Memory Retrieval-Augmented Generation (RAG), an Attribution-Driven Large-Language-Model (LLM) Orchestrator, and extensible Model Context Protocol (MCP) interventions, while an offline Memory Consolidation module continuously distills execution traces into reliable priors. Experimental evaluations across three backbone models (pi0, pi0.5, and SmolVLA) on LIBERO-PRO and our proposed LIBERO-SOMA benchmarks demonstrate that SOMA achieves an average absolute success rate gain of 56.6%, including an absolute improvement of 89.1% in long-horizon task chaining. Project page and source code are available at: https://github.com/LZY-1021/SOMA.
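The abstract describes an online loop in which contrastive dual-memory retrieval surfaces similar past successes and failures, and an orchestrator decides whether to intervene or let the frozen VLA policy run. The toy sketch below illustrates that control flow only; the paper does not publish this API, so every class, function, and field name here (`Trace`, `DualMemory`, `orchestrate`, the cosine-similarity retrieval, the intervention strings) is an illustrative assumption, not SOMA's actual implementation.

```python
# Hypothetical sketch of the online loop described in the abstract:
# contrastive dual-memory retrieval, then an orchestrator decision.
# All names and heuristics are illustrative, not from the paper.
from dataclasses import dataclass, field
import math

@dataclass
class Trace:
    embedding: list[float]   # task/scene embedding (assumed representation)
    outcome: str             # "success" or "failure"
    note: str                # attributed cause or strategy hint

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class DualMemory:
    successes: list = field(default_factory=list)
    failures: list = field(default_factory=list)

    def add(self, trace: Trace):
        pool = self.successes if trace.outcome == "success" else self.failures
        pool.append(trace)

    def retrieve(self, query, k=1):
        """Contrastive retrieval: nearest exemplars from BOTH pools, so the
        orchestrator sees what worked and what to avoid in similar contexts."""
        near = lambda pool: sorted(pool, key=lambda t: -cosine(query, t.embedding))[:k]
        return near(self.successes), near(self.failures)

def orchestrate(query, memory: DualMemory):
    """Toy stand-in for the LLM orchestrator: if a similar past failure is
    closer than any past success, trigger an intervention keyed on the
    attributed cause; otherwise run the frozen VLA policy unmodified."""
    succ, fail = memory.retrieve(query)
    best_s = cosine(query, succ[0].embedding) if succ else -1.0
    best_f = cosine(query, fail[0].embedding) if fail else -1.0
    if fail and best_f > best_s:
        return f"intervene: {fail[0].note}"
    return "run frozen policy"

memory = DualMemory()
memory.add(Trace([1.0, 0.0], "success", "default grasp works"))
memory.add(Trace([0.0, 1.0], "failure", "occluded target -> switch camera view"))
print(orchestrate([0.1, 0.9], memory))  # near the past failure -> intervenes
```

In the real system the intervention would be dispatched through an MCP tool call and the resulting trace fed back to the offline consolidation module, which distills repeated outcomes into the priors that future retrievals draw on.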
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
out-of-distribution robustness
perceptual noise
environmental variations
long-term memory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memory-Augmented System
In-Context Adaptation
Retrieval-Augmented Generation
Failure Attribution
Model Context Protocol
Zhuoran Li
Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University, China
Zhiyang Li
Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University, China
Kaijun Zhou
Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University, China
Jinyu Gu
Shanghai Jiao Tong University
Operating System · System Security · Virtualization