🤖 AI Summary
Fixed-resolution image patching in multimodal large language model (MLLM) training on Ascend NPUs causes detail loss and layout distortion, hindering comprehension of complex charts and diagrams.
Method: We propose a native-resolution Vision Transformer (ViT) architecture coupled with a three-stage collaborative training paradigm—pretraining, multi-task alignment, and instruction fine-tuning—and develop Mindspeed-MLLM, a distributed training framework integrating multimodal data packing, hybrid parallelism, NPU operator equivalence substitution, test-time resolution search, and weight averaging.
Results: Using only ~10% of the training data required by Qwen2.5-VL, our model matches it on general multimodal understanding and document/table comprehension, and delivers leading performance on OCR benchmarks. End-to-end training efficiency is significantly improved, achieving—for the first time on the Ascend platform—high-fidelity, high-efficiency native-resolution MLLM training.
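Of the techniques listed above, weight averaging is the simplest to illustrate. The sketch below is a minimal, hypothetical version (not the paper's implementation): it uniformly averages parameter values across several checkpoints, assuming all checkpoints share the same keys and shapes. Here plain Python lists stand in for tensors.

```python
def average_checkpoints(states):
    """Uniformly average parameter values across checkpoints.

    `states` is a list of dicts mapping parameter names to lists of
    floats (stand-ins for tensors). Assumes identical keys and shapes.
    """
    n = len(states)
    return {
        key: [sum(vals) / n for vals in zip(*(s[key] for s in states))]
        for key in states[0]
    }


# Averaging two checkpoints of a toy two-parameter model:
avg = average_checkpoints([{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}])
```

In practice this is typically applied to the final few fine-tuning checkpoints, trading a small amount of per-checkpoint peak performance for a flatter, more robust solution.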
📝 Abstract
We propose MindVL, a multimodal large language model trained on Ascend NPUs. Similar to Qwen2.5-VL, MindVL adopts native-resolution Vision Transformers, which enables it to process images at their original variable resolutions. This design avoids the degradation caused by fixed-resolution tiling while preserving fine-grained details and global layouts, which is crucial for visually dense content such as complex charts and diagrams. To ensure the smooth training of MindVL on Ascend NPUs, we develop Mindspeed-MLLM, a distributed multimodal training framework tailored for Ascend NPUs. To maintain training accuracy, we implement equivalent replacements for certain operators. MindVL undergoes a three-phase training process—a warm-up phase, a multitask training phase, and a supervised instruction tuning phase—to gradually enhance its capabilities. This process starts with basic visual and multimodal pre-training, followed by large-scale multitask training and instruction tuning. We also adopt multimodal data packing and hybrid parallelism techniques, which significantly improve end-to-end training speed. To further boost model performance, we introduce test-time resolution search and model weight averaging. Notably, despite using about 1/10 of the training data required by Qwen2.5-VL, MindVL achieves performance on par with Qwen2.5-VL in evaluations of general multimodal understanding and document/table comprehension. Beyond overall scores, MindVL also delivers leading performance in OCR assessments.
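The multimodal data packing mentioned in the abstract matters because native-resolution images yield highly variable sequence lengths, so padding every sample to the longest one wastes compute. A common (hypothetical here—the paper's exact scheme is not specified) approach is first-fit-decreasing bin packing of samples into a fixed token budget:

```python
def pack_samples(lengths, budget):
    """Pack variable-length samples into bins under a fixed token budget
    using first-fit-decreasing, reducing padding waste per batch.

    Returns a list of bins, each a list of sample indices.
    """
    order = sorted(range(len(lengths)), key=lambda i: -lengths[i])
    bins, loads = [], []
    for i in order:
        for b, load in enumerate(loads):
            if load + lengths[i] <= budget:  # first bin with room
                bins[b].append(i)
                loads[b] += lengths[i]
                break
        else:  # no existing bin fits: open a new one
            bins.append([i])
            loads.append(lengths[i])
    return bins


# Five samples with varying token counts, packed into 1000-token bins:
packed = pack_samples([500, 300, 200, 700, 100], budget=1000)
```

Each bin is then concatenated into one training sequence, with attention masks (not shown) preventing cross-sample attention.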