LEO-MINI: An Efficient Multimodal Large Language Model using Conditional Token Reduction and Mixture of Multi-Modal Experts

📅 2025-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address visual token redundancy in multimodal large language models (MLLMs)—which impairs computational efficiency and weakens visual reasoning—this paper proposes an efficient, lightweight architecture. Our method introduces (1) Conditional Token Reduction (CoTR), a text-instruction-driven mechanism that dynamically compresses visual tokens while preserving semantic fidelity, and (2) Multimodal Mixture of Experts (MMoE), integrating input-aware dynamic routing, always-active general-purpose LoRA experts, and domain-specialized visual experts to enable cross-modal synergistic enhancement. The approach retains strong language understanding while significantly improving visual reasoning accuracy. Experiments demonstrate that our model outperforms state-of-the-art efficient MLLMs across multiple vision-language benchmarks: it achieves 35–52% faster inference speed and an average 4.8% gain in visual understanding accuracy—marking the first instance of concurrent optimization of both efficiency and capability in MLLMs.

Technology Category

Application Category

📝 Abstract
Redundancy of visual tokens in multi-modal large language models (MLLMs) significantly reduces their computational efficiency. Recent approaches, such as resamplers and summarizers, have sought to reduce the number of visual tokens, but at the cost of visual reasoning ability. To address this, we propose LEO-MINI, a novel MLLM that significantly reduces the number of visual tokens and simultaneously boosts visual reasoning capabilities. For efficiency, LEO-MINI incorporates CoTR, a novel token reduction module to consolidate a large number of visual tokens into a smaller set of tokens, using the similarity between visual tokens, text tokens, and a compact learnable query. For effectiveness, to scale up the model's ability with minimal computational overhead, LEO-MINI employs MMoE, a novel mixture of multi-modal experts module. MMOE employs a set of LoRA experts with a novel router to switch between them based on the input text and visual tokens instead of only using the input hidden state. MMoE also includes a general LoRA expert that is always activated to learn general knowledge for LLM reasoning. For extracting richer visual features, MMOE employs a set of vision experts trained on diverse domain-specific data. To demonstrate LEO-MINI's improved efficiency and performance, we evaluate it against existing efficient MLLMs on various benchmark vision-language tasks.
Problem

Research questions and friction points this paper is trying to address.

Reduces visual token redundancy in MLLMs for efficiency
Enhances visual reasoning without sacrificing computational performance
Integrates novel token reduction and multi-modal expert modules
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses CoTR for efficient visual token reduction
Employs MMoE with LoRA experts for scalability
Integrates diverse vision experts for richer features
🔎 Similar Papers
No similar papers found.