FrankenBot: Brain-Morphic Modular Orchestration for Robotic Manipulation with Vision-Language Models

📅 2025-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current robotic systems struggle to jointly realize multi-cognitive capabilities—such as task planning, anomaly handling, and long-term memory—in complex, dynamic environments. To address this, we propose a vision-language-model (VLM)-driven brain-inspired robotic operating system. Our framework maps task planning, policy generation, memory management, and low-level control onto biologically inspired modules—cortex, cerebellum, temporal–hippocampal complex, and brainstem—establishing, for the first time, a fine-tuning-free correspondence between VLM knowledge and neuroanatomical functions. Leveraging hierarchical coordination and modular decoupling, the system achieves state-of-the-art generalization performance on both simulation and real-robot platforms without any task-specific fine-tuning: anomaly detection accuracy improves by 23.6%, long-horizon task success rate increases by 41.2%, and runtime efficiency rises by 35.8%.

📝 Abstract
Developing a general robot manipulation system capable of performing a wide range of tasks in complex, dynamic, and unstructured real-world environments has long been challenging. It is widely recognized that achieving human-like efficiency and robustness in manipulation requires the robotic brain to integrate a comprehensive set of functions -- such as task planning, policy generation, anomaly monitoring and handling, and long-term memory -- while operating efficiently across all of them. Vision-Language Models (VLMs), pretrained on massive multimodal data, have acquired rich world knowledge and exhibit exceptional scene understanding and multimodal reasoning capabilities. However, existing methods typically realize only a single function or a subset of functions within the robotic brain, without integrating them into a unified cognitive architecture. Inspired by a divide-and-conquer strategy and the architecture of the human brain, we propose FrankenBot, a VLM-driven, brain-morphic robotic manipulation framework that achieves both comprehensive functionality and high operational efficiency. Our framework comprises a suite of components that decouple key functions from frequent VLM calls, striking an optimal balance between functional completeness and system efficiency. Specifically, we map task planning, policy generation, memory management, and low-level interfacing to the cortex, cerebellum, temporal lobe-hippocampus complex, and brainstem, respectively, and design efficient coordination mechanisms for the modules. We conducted comprehensive experiments in both simulation and real-world robotic environments, demonstrating that our method offers significant advantages in anomaly detection and handling, long-term memory, operational efficiency, and stability -- all without requiring any fine-tuning or retraining.
Problem

Research questions and friction points this paper is trying to address.

Developing general robot manipulation for complex environments
Integrating multiple functions into unified cognitive architecture
Balancing functional completeness and system efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLM-driven brain-morphic robotic framework
Modular decoupling for functional efficiency
Biologically inspired cognitive architecture mapping