🤖 AI Summary
To address the challenge of balancing inference efficiency and capability in multimodal large language models (MLLMs), this paper proposes RoE—a dynamic expert routing mechanism that requires no architectural modification. RoE employs sample-dependent dynamic gating to adaptively select expert paths and introduces structural sparsity regularization to encourage shortcut inference. Crucially, it unifies routing strategies across training and inference phases for the first time in MLLMs, ensuring routing consistency. Experiments on LLaVA-1.5, LLaVA-HR, and VILA demonstrate that RoE achieves an average 3.3% performance gain across five vision-language benchmarks while outperforming MoE-LLaVA in inference speed. The core contributions are: (1) a zero-structural-modification dynamic routing paradigm; (2) structural sparsity–driven efficient inference; and (3) a cross-phase routing alignment mechanism.
📝 Abstract
Recently, mixture of experts (MoE) has become a popular paradigm for trading off model capacity against efficiency in multi-modal large language models (MLLMs). Different from previous efforts, we are dedicated to exploring the dynamic expert path in an already existing MLLM, and show that a standard MLLM can also act as a mixture of experts. To approach this target, we propose a novel dynamic expert scheme for MLLMs, termed Routing Experts (RoE), which can achieve example-dependent optimal path routing without obvious structural tweaks. Meanwhile, a new regularization of structural sparsity is also introduced to enforce MLLMs to learn more short-cut inference, ensuring efficiency. In addition, we realize the first attempt at aligning the training and inference schemes of MLLMs in terms of network routing. To validate RoE, we apply it to a set of recent MLLMs, including LLaVA-1.5, LLaVA-HR and VILA, and conduct extensive experiments on a broad set of VL benchmarks. The experimental results not only show the clear advantages of RoE in improving MLLMs' efficiency, but also reveal obvious gains over MoE-LLaVA in both performance and speed, e.g., an average performance gain of 3.3% on 5 benchmarks while being faster.
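The idea of example-dependent routing with a sparsity penalty can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: the gating form, the threshold, and the regularizer (penalizing the expected fraction of executed layers) are all assumptions for clarity.

```python
# Toy sketch of sample-dependent layer routing with a structural-sparsity
# penalty, in the spirit of the abstract. All names and formulas here are
# illustrative assumptions, not RoE's actual design.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def route(example_scores, threshold=0.5):
    """Turn one example's per-layer router scores into a binary path:
    1 = execute the layer as an "expert", 0 = take the shortcut (skip)."""
    gates = [sigmoid(s) for s in example_scores]
    path = [1 if g > threshold else 0 for g in gates]
    return gates, path

def sparsity_loss(gates):
    """Structural-sparsity regularizer: penalize the expected fraction of
    executed layers, pushing the model toward short-cut inference."""
    return sum(gates) / len(gates)

# One example's raw router scores (one per transformer layer):
# different examples yield different scores, hence different paths.
gates, path = route([2.0, -1.5, 0.3, -3.0])
print(path)                            # which layers this example executes
print(round(sparsity_loss(gates), 3))  # regularization term for this example
```

In a real model the scores would come from a small learned router conditioned on the input, and the hard threshold would be replaced by a differentiable relaxation during training so that the same routing rule can be used at inference.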