SaiVLA-0: Cerebrum–Pons–Cerebellum Tripartite Architecture for Compute-Aware Vision-Language-Action

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing vision-language-action (VLA) models in computational efficiency, modularity, and real-time control stability by proposing a neuroscientifically inspired cerebrum–pons–cerebellum tripartite architecture. The frozen Cerebrum handles high-level semantics, the Pons Adapter fuses perceptual inputs with task intent, and the Cerebellum (ParaCAT) enables rapid parallel decoding. Key innovations include fixed-ratio scheduling, a two-level feature caching mechanism, an active foveation strategy with geometrically calibrated wrist-centric regions of interest, and stability-enhancing techniques such as lag compensation, exponential moving average (EMA) smoothing, temperature scaling, and entropy regularization. The modular design permits independent component updates; with split feature caching, average success on the LIBERO benchmark improves from 86.5% to 92.5% while training time drops from 7.5 to 4.5 hours, and the SaiVLA-0 variant attains a 99.0% success rate.
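The split feature caching can be made concrete with a small sketch. The code below is an illustrative assumption, not the authors' implementation: because the Cerebrum is frozen, its features for a given input never change, so a two-level cache (an in-memory LRU level backed by an on-disk store) lets later epochs skip the backbone forward pass entirely. All names here (FeatureCache, backbone_fn) are hypothetical.

```python
"""Minimal sketch of two-level feature caching for a frozen backbone.

Hypothetical illustration: since the frozen Cerebrum's features for a
given frame never change, later epochs can reuse them instead of
recomputing the forward pass. All names are assumptions.
"""
import hashlib
import pickle
from collections import OrderedDict
from pathlib import Path


class FeatureCache:
    def __init__(self, cache_dir: str, mem_capacity: int = 512):
        self.mem = OrderedDict()          # level 1: in-memory LRU
        self.mem_capacity = mem_capacity
        self.disk = Path(cache_dir)       # level 2: on-disk store
        self.disk.mkdir(parents=True, exist_ok=True)

    @staticmethod
    def _key(sample_bytes: bytes) -> str:
        return hashlib.sha1(sample_bytes).hexdigest()

    def get_or_compute(self, sample_bytes: bytes, backbone_fn):
        key = self._key(sample_bytes)
        if key in self.mem:               # level-1 hit
            self.mem.move_to_end(key)
            return self.mem[key]
        path = self.disk / f"{key}.pkl"
        if path.exists():                 # level-2 hit
            feats = pickle.loads(path.read_bytes())
        else:                             # miss: run the frozen backbone once
            feats = backbone_fn(sample_bytes)
            path.write_bytes(pickle.dumps(feats))
        self.mem[key] = feats             # promote to level 1
        if len(self.mem) > self.mem_capacity:
            self.mem.popitem(last=False)  # evict least recently used
        return feats
```

Under head-only training, every epoch after the first then reduces to head updates over cached features, which is consistent with the reported drop from 7.5 to 4.5 hours.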

📝 Abstract
We revisit Vision-Language-Action through a neuroscience-inspired triad. Biologically, the Cerebrum provides stable high-level multimodal priors and remains frozen; the Pons Adapter integrates these cortical features with real-time proprioceptive inputs and compiles intent into execution-ready tokens; and the Cerebellum (ParaCAT) performs fast, parallel categorical decoding for online control, with hysteresis, EMA, temperature scaling, and entropy regularization for stability. A fixed-ratio schedule and two-stage feature caching make the system compute-aware and reproducible. Inspired by active, foveated vision, our wrist ROIs are geometrically tied to the end-effector via calibrated projection, providing a movement-stabilized, high-resolution view that is sensitive to fine-grained pose changes and complements the global context of the main view. The design is modular: upgrading the Cerebrum requires retraining only the Pons; changing robots requires training only the Cerebellum; and cerebellum-only RL can further refine control without touching high-level semantics. As a concept-and-protocol paper with preliminary evidence, we outline a timing protocol under matched conditions (GPU, resolution, batch size) to verify anticipated efficiency gains. We also report preliminary LIBERO evidence: split feature caching reduces training time from 7.5 h to 4.5 h and improves average success from 86.5% to 92.5% under official N1.5 head-only training, and SaiVLA-0 reaches 99.0% mean success.
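The wrist-ROI construction described above follows from standard pinhole geometry: project the end-effector's 3D position through the calibrated world-to-camera extrinsics and intrinsics to obtain a pixel center, then crop a fixed-size window around it. The sketch below is a hedged illustration of that idea, not the paper's code; K, T_cam_world, and the 128-pixel crop size are assumptions.

```python
"""Hedged sketch: wrist-centric ROI via calibrated pinhole projection.

Assumes a camera with intrinsics K (3x3) and world-to-camera
extrinsics T_cam_world (4x4); the end-effector's 3D position is
projected to pixels and a fixed-size crop is taken around it.
Names and the crop size are illustrative.
"""
import numpy as np


def project_point(K: np.ndarray, T_cam_world: np.ndarray,
                  p_world: np.ndarray) -> tuple[int, int]:
    """Project a 3D world point to pixel coordinates (u, v)."""
    p_h = np.append(p_world, 1.0)        # homogeneous coordinates
    p_cam = T_cam_world @ p_h            # transform into camera frame
    uvw = K @ p_cam[:3]                  # pinhole projection
    return int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))


def wrist_roi(image: np.ndarray, K: np.ndarray, T_cam_world: np.ndarray,
              ee_pos_world: np.ndarray, size: int = 128) -> np.ndarray:
    """Crop a size x size ROI centered on the projected end-effector."""
    h, w = image.shape[:2]
    u, v = project_point(K, T_cam_world, ee_pos_world)
    half = size // 2
    # Clamp so the crop stays inside the image while keeping its size
    # (assumes the image is at least size x size).
    u0 = int(np.clip(u - half, 0, w - size))
    v0 = int(np.clip(v - half, 0, h - size))
    return image[v0:v0 + size, u0:u0 + size]
```

Because the crop center tracks the end-effector, fine pose changes stay inside a stable high-resolution window instead of drifting relative to the frame, which is what makes the ROI complementary to the global main view.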
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
compute-aware
modular architecture
robotic control
neuroscience-inspired
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tripartite Architecture
Compute-Aware VLA
Modular Neuro-Inspired Control
Feature Caching
Foveated Wrist ROI
👥 Authors
Xiang Shi, Synthoid.ai
Wenlong Huang, Stanford University (Robotics, Machine Learning, Foundation Models)
Menglin Zou, Synthoid.ai
Xinhai Sun, Synthoid.ai