Open-Source Multimodal Moxin Models with Moxin-VLM and Moxin-VLA

📅 2025-12-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current open-source large language models (LLMs) suffer from opaque training procedures and weak multimodal and Chinese-language capabilities, hindering ecosystem growth. To address these limitations, we introduce Moxin, a fully open-stack multimodal LLM family. At its core lies Moxin-7B, a base model with publicly released weights, training data, source code, and complete training configurations. From this foundation, we derive specialized variants: Moxin-VLM for vision-language understanding, Moxin-VLA for vision-language-action decision-making, and a Chinese-optimized variant. Notably, Moxin-VLA introduces the first end-to-end reproducible open-source VLA architecture, integrating a ViT-based visual encoder, action tokenization, and multi-stage cross-modal alignment training. Evaluated on MMBench, OCRVQA, and ALFWorld, Moxin achieves performance competitive with comparable-scale proprietary models. All models, datasets, and training frameworks are publicly released under permissive licenses, fostering reproducible, collaborative multimodal AI research.
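The VLA design sketched in the summary (a ViT visual encoder feeding a language backbone that emits discretized action tokens) follows the common VLA recipe. Below is a minimal, hypothetical PyTorch illustration of that pattern; the class name `ToyVLA`, the toy dimensions, and the 256-bin uniform action tokenization are assumptions for exposition, not Moxin-VLA's released implementation.

```python
# Minimal, hypothetical sketch of the ViT-encoder -> projector -> LLM ->
# action-token pattern described above. All names and sizes are toy
# placeholders, not Moxin-VLA's actual architecture or code.
import torch
import torch.nn as nn

def tokenize_action(a, bins=256, low=-1.0, high=1.0):
    """Uniformly discretize continuous actions in [low, high] into bin ids
    (a common VLA-style action tokenization; the binning scheme here is assumed)."""
    a = a.clamp(low, high)
    return ((a - low) / (high - low) * (bins - 1)).round().long()

class ToyVLA(nn.Module):
    def __init__(self, vis_dim=256, llm_dim=512, text_vocab=32000, action_bins=256):
        super().__init__()
        self.vit = nn.Linear(14 * 14 * 3, vis_dim)    # stand-in for a ViT patch encoder
        self.projector = nn.Linear(vis_dim, llm_dim)  # align vision features to LLM space
        self.llm = nn.TransformerEncoder(             # tiny stand-in for the LLM backbone
            nn.TransformerEncoderLayer(llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Text vocabulary extended with dedicated action-token ids.
        self.embed = nn.Embedding(text_vocab + action_bins, llm_dim)
        self.action_head = nn.Linear(llm_dim, action_bins)

    def forward(self, patches, text_ids):
        # patches: (B, N, 14*14*3) flattened image patches; text_ids: (B, T)
        vis = self.projector(self.vit(patches))
        txt = self.embed(text_ids)
        h = self.llm(torch.cat([vis, txt], dim=1))
        return self.action_head(h[:, -1])  # logits over action bins, shape (B, action_bins)

model = ToyVLA()
logits = model(torch.randn(2, 16, 14 * 14 * 3), torch.randint(0, 32000, (2, 8)))
target = tokenize_action(torch.tensor([0.3, -0.7]))  # one action value per example
loss = nn.functional.cross_entropy(logits, target)
```

The multi-stage cross-modal alignment mentioned in the summary would then train pieces of this stack in phases (e.g., the projector on image-text pairs before end-to-end tuning on action-labeled data); the exact schedule is specified in the paper itself.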

📝 Abstract
Recently, Large Language Models (LLMs) have undergone a significant transformation, marked by a rapid rise in both their popularity and capabilities. Leading this evolution are proprietary LLMs like GPT-4 and GPT-o1, which have captured widespread attention in the AI community due to their remarkable performance and versatility. Simultaneously, open-source LLMs, such as LLaMA and Mistral, have contributed greatly to the ever-increasing popularity of LLMs because they are easy to customize and deploy across diverse applications. Moxin 7B is introduced as a fully open-source LLM developed in accordance with the Model Openness Framework, which moves beyond the simple sharing of model weights to embrace complete transparency in training, datasets, and implementation details, thus fostering a more inclusive and collaborative research environment that can sustain a healthy open-source ecosystem. To further equip Moxin with capabilities for different tasks, we develop three variants based on it: Moxin-VLM, Moxin-VLA, and Moxin-Chinese, which target vision-language, vision-language-action, and Chinese capabilities, respectively. Experiments show that our models achieve superior performance across various evaluations. We adopt open-source frameworks and open data for training, and we release our models along with the data and code used to derive them.
Problem

Research questions and friction points this paper is trying to address.

Develop open-source multimodal models for vision-language tasks
Enhance LLMs with vision-language-action and Chinese capabilities
Promote transparency in training, datasets, and implementation details
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source multimodal models with vision-language-action capabilities
Complete transparency in training, datasets, and implementation details
Superior performance using open-source frameworks and open data
Authors

Pu Zhao
Northeastern University

Xuan Shen
Cornell Tech, Northeastern University
Efficient Deep Learning, ML Systems, AutoML

Zhenglun Kong
Harvard University
Efficient Deep Learning, Large Language Model, AI4Science

Yixin Shen
Inria Rennes
Quantum Algorithms, Cryptography

Sung-En Chang
Northeastern University
Model compression, machine learning, deep learning, quantization, efficient training

Arash Akbari
Northeastern University

Timothy Rupprecht
Northeastern University

Lei Lu
Northeastern University

Enfu Nan
Northeastern University

Changdi Yang
PhD candidate, Northeastern University, Snap Inc.
Efficient Deep Learning

Yumei He
Tulane University

Weiyan Shi
Northeastern University

Xingchen Xu
University of Washington

Yu Huang
Roboraction.ai

Wei Jiang
Futurewei

Wei Wang
Futurewei

Yue Chen
Futurewei

Yong He
Futurewei

Yanzhi Wang
Northeastern University, AIBAO LLC