🤖 AI Summary
Current multimodal large language models (MLLMs) struggle to be both data- and compute-efficient on vision-language tasks: self-attention-based approaches incur high computational overhead on visual tokens, while cross-attention-based approaches require more data to align modalities. To address this, we propose Composite Attention, an attention mechanism that eliminates self-attention among visual tokens for compute efficiency and reuses the weights of each pretrained LLM layer for vision-language alignment, yielding data efficiency. We also present EE-MLLM-F, a training-free variant that reduces the computation cost of self-attention-based methods without any additional training. EE-MLLM significantly outperforms Flamingo with limited training data, and on an H800 GPU it cuts prefill latency to 79 ms versus LLaVA's 277 ms (roughly a 71% reduction), while performing well on benchmarks such as MMBench, SeedBench, TextVQA, and DocVQA.
📝 Abstract
Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated satisfactory performance across various vision-language tasks. Current approaches for vision and language interaction fall into two categories: self-attention-based and cross-attention-based methods. However, both approaches present inherent limitations, forcing a trade-off between data and computational efficiency. To address this issue, we introduce the Data-$\textbf{E}$fficient and Compute-$\textbf{E}$fficient $\textbf{MLLM}$ ($\textbf{EE-MLLM}$). Specifically, we modify the original self-attention mechanism in the MLLM into a composite attention mechanism. This mechanism has two key characteristics: 1) it eliminates the computational overhead of self-attention among visual tokens to achieve $\textbf{compute efficiency}$, and 2) it reuses the weights of each LLM layer to facilitate effective vision-language modality alignment for $\textbf{data efficiency}$. As a result, EE-MLLM significantly outperforms Flamingo with limited training data, and reduces the prefilling time to 79 ms on an H800 GPU, compared to LLaVA's 277 ms. To further investigate the efficiency of EE-MLLM, we present a training-free variant named EE-MLLM-F, which reduces the computation cost of self-attention-based methods without additional training. Experimental results demonstrate the effectiveness of EE-MLLM across a range of benchmarks, including general-purpose datasets such as MMBench and SeedBench, as well as fine-grained tasks such as TextVQA and DocVQA.
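To make the composite-attention idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: queries come only from text tokens, keys and values span the concatenated visual and text tokens, and a single set of projection weights, which in EE-MLLM would be reused from a pretrained LLM layer, serves both modalities. The module name `CompositeAttention` and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositeAttention(nn.Module):
    """Illustrative composite attention: text tokens attend over [visual; text]
    keys/values, while visual tokens are never used as queries, so no
    visual-to-visual self-attention is computed."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        # In EE-MLLM these projections would reuse the weights of a pretrained
        # LLM layer; here they are freshly initialized for illustration.
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.o_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, visual_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        b, n_txt, d = text_tokens.shape
        n_vis = visual_tokens.size(1)

        # Keys/values cover both modalities; queries come only from text tokens.
        kv = torch.cat([visual_tokens, text_tokens], dim=1)
        q = self.q_proj(text_tokens).view(b, n_txt, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(kv).view(b, n_vis + n_txt, self.num_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(kv).view(b, n_vis + n_txt, self.num_heads, self.head_dim).transpose(1, 2)

        # Every text token can see all visual tokens; text-to-text attention
        # remains causal (True = allowed to attend).
        allowed = torch.ones(n_txt, n_vis + n_txt, dtype=torch.bool, device=text_tokens.device)
        allowed[:, n_vis:] = torch.tril(
            torch.ones(n_txt, n_txt, dtype=torch.bool, device=text_tokens.device)
        )

        out = F.scaled_dot_product_attention(q, k, v, attn_mask=allowed)
        out = out.transpose(1, 2).reshape(b, n_txt, d)
        return self.o_proj(out)


# Example with hypothetical sizes: 576 visual tokens, 32 text tokens, 4096-dim hidden state.
attn = CompositeAttention(dim=4096, num_heads=32)
y = attn(torch.randn(1, 576, 4096), torch.randn(1, 32, 4096))
print(y.shape)  # torch.Size([1, 32, 4096])
```

Since attention cost scales with the number of query tokens, dropping visual queries removes the quadratic cost over the (typically numerous) visual tokens during prefill, which is the source of the compute savings the abstract attributes to composite attention.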