MindVL: Towards Efficient and Effective Training of Multimodal Large Language Models on Ascend NPUs

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fixed-resolution image patching in multimodal large language model (MLLM) training on Ascend NPUs causes detail loss and layout distortion, hindering comprehension of complex charts and diagrams. Method: We propose a native-resolution Vision Transformer (ViT) architecture coupled with a three-stage collaborative training paradigm—pretraining, multi-task alignment, and instruction fine-tuning—and develop Mindspeed-MLLM, a distributed training framework integrating multimodal data packing, hybrid parallelism, NPU operator equivalence substitution, test-time resolution search, and weight averaging. Results: Using only ~10% of the training data required by Qwen2.5-VL, our model surpasses it on document understanding, chart parsing, and OCR benchmarks. End-to-end training efficiency is significantly improved, achieving—for the first time on the Ascend platform—high-fidelity, high-efficiency native-resolution MLLM training.
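The gain from native-resolution processing comes from tokenizing each image on its own patch grid rather than resizing everything to one fixed square. The sketch below is a hypothetical illustration of that idea; the patch size and rounding rule are assumptions, not the paper's exact preprocessing.

```python
# Hypothetical sketch of native-resolution patching: round each image side to
# the nearest multiple of the ViT patch size, so the patch grid tracks the
# original aspect ratio instead of a fixed square. Patch size 14 is assumed.
def native_resolution_grid(height, width, patch_size=14):
    """Return (resized_h, resized_w, num_patches) for a variable-resolution ViT."""
    resized_h = max(patch_size, round(height / patch_size) * patch_size)
    resized_w = max(patch_size, round(width / patch_size) * patch_size)
    num_patches = (resized_h // patch_size) * (resized_w // patch_size)
    return resized_h, resized_w, num_patches

# A tall chart keeps its layout: 1000x700 maps to a 71x50 patch grid,
# rather than being squashed into, say, a fixed 224x224 (16x16) grid.
h, w, n = native_resolution_grid(1000, 700)
```

Because the token count grows with image area, a dense chart yields far more visual tokens than a thumbnail, which is exactly what preserves fine-grained detail.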

📝 Abstract
We propose MindVL, a multimodal large language model trained on Ascend NPUs. Like Qwen2.5-VL, MindVL adopts native-resolution Vision Transformers, enabling it to process images at their original, variable resolutions. This design avoids the degradation caused by fixed-resolution tiling while preserving fine-grained details and global layouts, which is crucial for visually dense content such as complex charts and diagrams. To ensure smooth training of MindVL on Ascend NPUs, we develop Mindspeed-MLLM, a distributed multimodal training framework tailored for Ascend NPUs. To maintain training accuracy, we implement equivalent replacements for certain operators. MindVL undergoes a three-phase training process, namely the warm-up phase, the multitask training phase, and the supervised instruction tuning phase, to gradually enhance its capabilities. This process starts with basic visual and multimodal pre-training, followed by large-scale multitask training and instruction tuning. We also adopt multimodal data packing and hybrid parallelism techniques, which significantly improve end-to-end training speed. To further boost model performance, we specifically introduce test-time resolution search and model weight averaging. Notably, despite using about 1/10 of the training data required by Qwen2.5-VL, MindVL achieves performance on par with Qwen2.5-VL in evaluations of general multimodal understanding and document/table comprehension. Beyond overall scores, MindVL also delivers leading performance in OCR assessments.
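The model weight averaging the abstract mentions is, in its simplest form, a uniform average of parameter tensors across several checkpoints. This minimal sketch assumes NumPy arrays keyed by parameter name; it illustrates the general technique, not the paper's exact procedure.

```python
import numpy as np

def average_checkpoints(checkpoints):
    """Uniformly average parameter tensors across a list of checkpoint dicts."""
    keys = checkpoints[0].keys()
    return {k: np.mean([ckpt[k] for ckpt in checkpoints], axis=0) for k in keys}

# Toy example: three "checkpoints", each holding one 2x2 weight matrix
# filled with 0.0, 1.0, and 2.0 respectively; their average is all 1.0.
ckpts = [{"w": np.full((2, 2), float(i))} for i in range(3)]
avg = average_checkpoints(ckpts)
```

Averaging checkpoints from late training steps (or from runs with different settings) often yields a flatter, better-generalizing solution than any single checkpoint.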
Problem

Research questions and friction points this paper is trying to address.

Efficient multimodal training on Ascend NPUs
Processing variable-resolution images without degradation
Achieving competitive performance with reduced training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Native-resolution Vision Transformers for original variable resolutions
Distributed training framework with operator replacements for NPUs
Multimodal data packing and hybrid parallelism techniques
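Multimodal data packing concatenates variable-length samples into fixed-budget sequences so batches waste less compute on padding. The greedy first-fit sketch below is a hypothetical helper over token lengths, not the Mindspeed-MLLM implementation.

```python
def pack_samples(lengths, max_len):
    """Greedy first-fit packing: group sample indices into bins of at most
    max_len total tokens, placing each sample in the first bin it fits."""
    bins = []  # each bin: [remaining_capacity, [sample indices]]
    for idx, n in enumerate(lengths):
        for b in bins:
            if b[0] >= n:
                b[0] -= n
                b[1].append(idx)
                break
        else:
            bins.append([max_len - n, [idx]])
    return [indices for _, indices in bins]

# Four samples of 4, 3, 2, and 5 tokens fit into two bins of budget 8,
# instead of four padded sequences.
packed = pack_samples([4, 3, 2, 5], max_len=8)
```

In practice each packed sequence also needs an attention mask (or position reset) so samples sharing a bin do not attend to one another.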
Feilong Chen
Huawei Inc.; Previously CASIA
(Native) Multimodal LLM · Multimodal Generation · Multimodal Reasoning · Omni-modal LLM
Yijiang Liu
PhD
Machine Learning Efficiency
Yi Huang
Huawei Technologies Co., Ltd.
Hao Wang
Huawei Technologies Co., Ltd.
Miren Tian
Huawei Technologies Co., Ltd.
Ya-Qi Yu
Huawei Technologies Co., Ltd.
Minghui Liao
Huawei Technologies Co., Ltd.
Jihao Wu
Huawei Inc.
Computer Vision · Multi-Modality