AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model

📅 2025-10-13
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Mainstream multimodal large language models (MLLMs) such as Qwen-VL and GPT-4o are difficult to deploy on resource-constrained edge devices (e.g., smartphones) because of limited memory, power, and computational capacity. To address this, the paper introduces AndesVL, a family of lightweight on-device MLLMs with 0.6B to 4B parameters. Methodologically, AndesVL pairs the Qwen3 language model with multiple vision encoders in an end-to-end training framework, trains on large-scale multitask data, and adds a 1+N LoRA fine-tuning strategy for efficient training and complementary capability expansion. Evaluated across diverse benchmarks, covering text-rich image understanding, mathematical reasoning, multi-image comprehension, visual question answering (VQA), hallucination mitigation, multilingual understanding, and GUI understanding, AndesVL achieves state-of-the-art performance at comparable parameter counts, advancing practical multimodal understanding on edge devices.
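The summary names the 1+N LoRA strategy but not its mechanics. Below is a minimal sketch of the general one-base-plus-N-adapters pattern, assuming a Hugging Face PEFT-style setup; the adapter names, ranks, and target modules are illustrative assumptions, not the report's actual configuration.

```python
# One shared base model ("1") plus N swappable capability adapters,
# sketched with the PEFT library; all names and hyperparameters are
# placeholders, not AndesVL's actual setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")

def lora_cfg():
    # Fresh config per adapter; rank/alpha/targets are assumptions.
    return LoraConfig(r=16, lora_alpha=32,
                      target_modules=["q_proj", "v_proj"])

# "1": a shared adapter trained on general multitask data.
model = get_peft_model(base, lora_cfg(), adapter_name="shared")

# "+N": capability-specific adapters (hypothetical task names).
for task in ("ocr", "gui", "math"):
    model.add_adapter(task, lora_cfg())

model.set_adapter("gui")  # activate one adapter per incoming request
```

The on-device appeal of this pattern is that the phone ships a single base checkpoint and only swaps small adapter weight files per capability.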

📝 Abstract
In recent years, cloud-based MLLMs such as QwenVL, InternVL, GPT-4o, Gemini, and Claude Sonnet have demonstrated outstanding performance, but their enormous sizes, reaching hundreds of billions of parameters, far exceed the memory, power, and compute budgets of edge devices such as mobile phones. This paper introduces AndesVL, a suite of mobile-side MLLMs with 0.6B to 4B parameters built on Qwen3's LLM and various visual encoders. We comprehensively outline the model architectures, training pipeline, and training data of AndesVL, which achieves first-tier performance across a wide range of open-source benchmarks, spanning text-rich image understanding, reasoning and math, multi-image comprehension, general VQA, hallucination mitigation, multilingual understanding, and GUI-related tasks, when compared with state-of-the-art models of a similar scale. Furthermore, we introduce a 1+N LoRA fine-tuning strategy for efficient training and capability expansion.
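The abstract describes the common encoder-projector-decoder composition: a visual encoder feeding a Qwen3 LLM through a connector. A schematic PyTorch sketch follows, assuming a two-layer MLP connector and placeholder dimensions, since the report's actual connector and sizes are not given here.

```python
# Schematic composition of a mobile-side MLLM: vision encoder ->
# projector -> LLM. Dimensions and the MLP projector are assumptions.
import torch
import torch.nn as nn

class MobileVLM(nn.Module):
    def __init__(self, vision_encoder, llm, vis_dim=1024, llm_dim=2048):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g. a compact ViT
        self.projector = nn.Sequential(       # maps image features into
            nn.Linear(vis_dim, llm_dim),      # the LLM embedding space
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.llm = llm                        # e.g. a Qwen3 backbone

    def forward(self, pixel_values, text_embeds):
        vis_tokens = self.projector(self.vision_encoder(pixel_values))
        # Prepend projected visual tokens to the text embeddings.
        fused = torch.cat([vis_tokens, text_embeds], dim=1)
        return self.llm(inputs_embeds=fused)
```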
Problem

Research questions and friction points this paper is trying to address.

Developing efficient multimodal models for mobile devices
Overcoming memory and computational constraints on edge devices (see the back-of-envelope sketch after this list)
Achieving competitive performance across diverse vision-language tasks
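
A back-of-envelope calculation (illustrative numbers, not figures from the report) shows why the 0.6B-4B range is the one that fits phone memory budgets:

```python
# Rough weight-memory math for on-device deployment; the bit widths
# compared here are illustrative assumptions.
def weight_memory_gib(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 2**30

for params in (0.6, 4.0):
    for bits in (16, 4):
        print(f"{params}B params @ {bits}-bit: "
              f"{weight_memory_gib(params, bits):.2f} GiB")
# 4B @ 16-bit is ~7.45 GiB, beyond most phones once the OS and apps
# are counted; 4B @ 4-bit is ~1.86 GiB and 0.6B @ 4-bit ~0.28 GiB.
```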
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mobile-side multimodal model with 0.6B-4B parameters
Based on Qwen3 LLM and various visual encoders
Achieves top performance across multiple benchmark categories
👥 Authors

Zhiwei Jin
AndesVL Team, OPPO AI Center
Xiaohui Song
AndesVL Team, OPPO AI Center
Nan Wang
AndesVL Team, OPPO AI Center
Yafei Liu
Southwest Jiaotong University
Chao Li
AndesVL Team, OPPO AI Center
Xin Li
AndesVL Team, OPPO AI Center
Ruichen Wang
University of Maryland, College Park
Zhihao Li
AndesVL Team, OPPO AI Center
Qi Qi
AndesVL Team, OPPO AI Center
Long Cheng
AndesVL Team, OPPO AI Center
Dongze Hao
AndesVL Team, OPPO AI Center
Quanlong Zheng
AndesVL Team, OPPO AI Center
Yanhao Zhang
AndesVL Team, OPPO AI Center
Haobo Ji
AndesVL Team, OPPO AI Center
Jian Ma
AndesVL Team, OPPO AI Center
Zhitong Zheng
AndesVL Team, OPPO AI Center
Zhenyi Lin
AndesVL Team, OPPO AI Center
Haolin Deng
AndesVL Team, OPPO AI Center
Xin Zou
AndesVL Team, OPPO AI Center
Xiaojie Yin
AndesVL Team, OPPO AI Center
Ruilin Wang
AndesVL Team, OPPO AI Center
Liankai Cai
AndesVL Team, OPPO AI Center
Haijing Liu
AndesVL Team, OPPO AI Center
Yuqing Qiu
AndesVL Team, OPPO AI Center
Ke Chen
AndesVL Team, OPPO AI Center