🤖 AI Summary
Embodied intelligence faces challenges in environment perception and adaptive decision-making. Method: This work introduces an open-source multimodal "foundation brain model" architecture, scalable from 7B to 72B parameters and deployable across diverse physical embodiments. To address perceptual and behavioral adaptation, we propose DPPO (Deliberate Practice Policy Optimization), a meta-cyclic training framework that closes a loop of Reinforcement Learning, Refinement, Diagnostic Feedback, and Supervised Fine-Tuning, mimicking human metacognition to enable efficient deliberate practice. The training data are distilled from a raw corpus of over 4 billion tokens, and the models are trained on an A800 GPU cluster. Contribution/Results: Our approach achieves a 20.3% improvement over the base model, outperforms open-source models exceeding 100B parameters by 10.6%, and attains state-of-the-art performance on major embodied AI benchmarks, matching or exceeding closed-source SOTA systems.
📝 Abstract
This report presents Pelican-VL 1.0, a new family of open-source embodied brain models with parameter scales ranging from 7 billion to 72 billion. Our mission is to embed powerful intelligence into diverse embodiments. Pelican-VL 1.0 is currently the largest-scale open-source embodied multimodal brain model. Its core advantage lies in the deep integration of data power and an intelligent adaptive learning mechanism. Specifically, our metaloop distills a high-quality dataset from a raw corpus containing over 4 billion tokens. Pelican-VL 1.0 is trained on a large-scale cluster of more than 1,000 A800 GPUs, consuming over 50k A800 GPU-hours per checkpoint. This yields a 20.3% performance uplift over its base model and outperforms 100B-level open-source counterparts by 10.6%, placing it on par with leading proprietary systems on well-known embodied benchmarks. To train Pelican-VL 1.0, we establish a novel framework, DPPO (Deliberate Practice Policy Optimization), inspired by human metacognition. We operationalize this as a metaloop that teaches the model to practice deliberately: an RL-Refine-Diagnose-SFT cycle.
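The RL-Refine-Diagnose-SFT metaloop can be illustrated with a minimal sketch. Everything below (the `Model` class, the phase functions, the task names) is a hypothetical placeholder used only to show the control flow of one deliberate-practice cycle, not the authors' actual implementation or API.

```python
# Hypothetical sketch of a DPPO-style metaloop: each round runs RL rollouts,
# refines failures into practice data, diagnoses weaknesses, then applies SFT.
from dataclasses import dataclass, field


@dataclass
class Model:
    """Toy stand-in for the policy; tracks which tasks it has mastered."""
    skills: set = field(default_factory=set)


def rl_phase(model: Model, tasks: list) -> list:
    # RL rollouts: attempt each task and collect the failure cases.
    return [t for t in tasks if t not in model.skills]


def refine(failures: list) -> list:
    # Refinement: turn failure cases into focused practice examples.
    return [f"practice:{t}" for t in failures]


def diagnose(failures: list) -> dict:
    # Diagnostic feedback: label the weak capabilities to target next.
    return {t: "weak" for t in failures}


def sft_phase(model: Model, practice_data: list) -> None:
    # Supervised fine-tuning on the distilled practice data.
    for item in practice_data:
        model.skills.add(item.removeprefix("practice:"))


def metaloop(model: Model, tasks: list, rounds: int = 3) -> Model:
    """One deliberate-practice cycle per round: RL -> Refine -> Diagnose -> SFT."""
    for _ in range(rounds):
        failures = rl_phase(model, tasks)
        if not failures:          # stop early once all tasks succeed
            break
        report = diagnose(failures)
        data = refine([t for t, status in report.items() if status == "weak"])
        sft_phase(model, data)
    return model


trained = metaloop(Model(), ["grasp", "navigate", "plan"])
print(sorted(trained.skills))  # -> ['grasp', 'navigate', 'plan']
```

The point of the sketch is the closed loop: failures from the RL phase, not fresh external data, drive the refinement and fine-tuning of the next round, which is the "deliberate practice" analogy the paper draws.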