🤖 AI Summary
To address the excessive computational overhead caused by visual token redundancy in multi-image large multimodal models (LMMs), this paper proposes an encoder-agnostic, training-free visual token fusion method. The approach uses cosine similarity to perform hierarchical, iterative clustering and fusion across images, patches, and models, dynamically merging redundant tokens to substantially compress the input sequence length. It requires no modification of the visual encoder architecture and no fine-tuning of the language model, making it compatible with arbitrary Vision Transformer (ViT)-based encoders and mainstream LMMs—including LLaVA and Qwen-VL—and supporting high-resolution, interleaved multi-image inputs. Evaluated on the LLaVA-Interleave Bench and a newly created ComPairs benchmark, the method reduces visual tokens by 40% on average, accelerates inference by 2.1×, and maintains—or even improves—multi-image comprehension accuracy.
📝 Abstract
Large Multimodal Models (LMMs) are powerful tools capable of reasoning over and understanding multimodal information beyond text and language. Despite their widespread impact, the development of LMMs is hindered by higher computational requirements compared to their unimodal counterparts. One of the main causes is the large number of tokens needed to encode the visual input, which is especially evident in multi-image multimodal tasks. Recent approaches to reducing visual tokens depend on the visual encoder architecture, require fine-tuning the LLM to maintain performance, and only consider single-image scenarios. To address these limitations, we propose ToFu, a visual encoder-agnostic, training-free Token Fusion strategy that combines redundant visual tokens of LMMs for high-resolution, multi-image tasks. The core intuition behind our method is straightforward yet effective: preserve distinctive tokens while combining similar ones. We achieve this by sequentially examining visual tokens and deciding whether to merge them with others or keep them as separate entities. We validate our approach on the well-established LLaVA-Interleave Bench, which covers challenging multi-image tasks. In addition, we push our method to the extreme by testing it on a newly created benchmark, ComPairs, focused on multi-image comparisons, where a larger number of images and visual tokens are fed to the LMMs. Our extensive analysis, covering several LMM architectures, demonstrates the benefits of our approach in terms of both efficiency and performance gain.
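The sequential merge-or-keep scan described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `tofu_fuse`, the similarity threshold value, and the running-mean merge rule are all assumptions introduced here for clarity.

```python
import numpy as np

def tofu_fuse(tokens, threshold=0.9):
    """Illustrative sketch of similarity-based visual token fusion.

    tokens: (N, D) array of visual token embeddings.
    threshold: assumed cosine-similarity cutoff (hypothetical value,
               not specified in the abstract).

    Tokens are examined sequentially: each one is merged into the most
    similar already-kept token if their cosine similarity exceeds the
    threshold; otherwise it is kept as a separate entity.
    """
    kept = [tokens[0].astype(float).copy()]
    counts = [1]  # how many tokens each kept entry has absorbed
    for t in tokens[1:]:
        k = np.stack(kept)
        # cosine similarity of t against every kept token
        sims = (k @ t) / (np.linalg.norm(k, axis=1) * np.linalg.norm(t) + 1e-8)
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            # merge: update the kept token as a running mean
            counts[j] += 1
            kept[j] += (t - kept[j]) / counts[j]
        else:
            # distinctive token: keep it as-is
            kept.append(t.astype(float).copy())
            counts.append(1)
    return np.stack(kept)
```

For example, two near-duplicate tokens collapse into one fused token while a dissimilar token survives, shortening the visual sequence passed to the LMM without discarding distinctive content.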