🤖 AI Summary
Current open-source large language models (LLMs) suffer from opaque training procedures and weak multimodal and Chinese-language capabilities, hindering ecosystem growth. To address these limitations, we introduce Moxin, a fully open-stack multimodal LLM family. At its core lies Moxin-7B, a base model with publicly released weights, training data, source code, and complete training configurations. From this foundation, we derive specialized variants: Moxin-VLM for vision-language understanding, Moxin-VLA for vision-language-action decision-making, and a Chinese-optimized variant. Notably, Moxin-VLA introduces the first end-to-end reproducible open-source VLA architecture, integrating a ViT-based visual encoder, action tokenization, and multi-stage cross-modal alignment training. Evaluated on MMBench, OCRVQA, and ALFWorld, Moxin achieves performance competitive with comparable-scale proprietary models. All models, datasets, and training frameworks are publicly released under permissive licenses, fostering reproducible, collaborative multimodal AI research.
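The action tokenization mentioned above is a standard ingredient of VLA models: continuous robot actions are discretized into a fixed set of bins so they can be predicted as ordinary language-model tokens. The following is a minimal illustrative sketch of that idea only; the bin count, action ranges, and function names are assumptions for exposition, not Moxin-VLA's actual configuration.

```python
import numpy as np

N_BINS = 256  # assumed number of vocabulary slots reserved for action tokens

def tokenize_action(action, low=-1.0, high=1.0, n_bins=N_BINS):
    """Map each continuous action dimension to a discrete bin index."""
    action = np.clip(np.asarray(action, dtype=np.float64), low, high)
    # Scale to [0, 1], then to integer bin indices in [0, n_bins - 1].
    scaled = (action - low) / (high - low)
    return np.minimum((scaled * n_bins).astype(int), n_bins - 1)

def detokenize_action(tokens, low=-1.0, high=1.0, n_bins=N_BINS):
    """Recover approximate continuous actions from bin centers."""
    centers = (np.asarray(tokens, dtype=np.float64) + 0.5) / n_bins
    return centers * (high - low) + low
```

In this scheme the language model's output head simply predicts bin indices, and the small round-trip error (at most half a bin width) is the cost of reusing the token-prediction machinery for control.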
📝 Abstract
Recently, Large Language Models (LLMs) have undergone a significant transformation, marked by a rapid rise in both their popularity and capabilities. Leading this evolution are proprietary LLMs like GPT-4 and GPT-o1, which have captured widespread attention in the AI community due to their remarkable performance and versatility. Simultaneously, open-source LLMs, such as LLaMA and Mistral, have contributed greatly to the ever-increasing popularity of LLMs due to the ease of customizing and deploying the models across diverse applications. Moxin 7B is introduced as a fully open-source LLM developed in accordance with the Model Openness Framework, which moves beyond the simple sharing of model weights to embrace complete transparency in training, datasets, and implementation details, thus fostering a more inclusive and collaborative research environment that can sustain a healthy open-source ecosystem. To further equip Moxin with capabilities across different tasks, we develop three variants based on Moxin, namely Moxin-VLM, Moxin-VLA, and Moxin-Chinese, which target vision-language, vision-language-action, and Chinese capabilities, respectively. Experiments show that our models achieve superior performance across various evaluations. We adopt open-source frameworks and open data for training, and we release our models along with the data and code used to derive them.