FOLDER: Accelerating Multi-modal Large Language Models with Enhanced Performance

📅 2025-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational overhead, memory bottlenecks, and inference latency induced by long visual token sequences in multimodal large language models (MLLMs), this work proposes a plug-and-play lightweight visual token compression module. We first systematically quantify the trade-off between visual token redundancy and information loss. Building on this analysis, we design an attention-aware token clustering and weighted fusion mechanism, dynamic redundancy filtering, and an end-to-end differentiable token reduction scheme—fully compatible with arbitrary vision backbones without architectural modification. Crucially, our method requires no model fine-tuning. It achieves up to 70% visual token compression while preserving—or even surpassing—original model performance on benchmarks including MMBench and OCRBench. Inference speed improves by 2.1×–3.8×, demonstrating significant efficiency gains without sacrificing multimodal understanding capability.
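The attention-aware clustering and weighted fusion described above can be illustrated with a minimal sketch. This is not the paper's actual FOLDER algorithm; it is a hypothetical toy showing the general idea of scoring visual tokens by attention, keeping the top fraction, and fusing each dropped token into its most similar kept token with attention-based weights (`fuse_tokens` and its parameters are illustrative names, not the paper's API).

```python
# Hypothetical sketch of attention-weighted visual token fusion.
# NOT the paper's FOLDER implementation; a toy illustration only.
import numpy as np

def fuse_tokens(tokens: np.ndarray, attn: np.ndarray, keep_ratio: float = 0.3) -> np.ndarray:
    """tokens: (N, D) visual tokens; attn: (N,) importance scores
    (e.g. mean attention each token receives). Keeps the top-k tokens
    and fuses every dropped token into its nearest kept token,
    weighted by attention, so information is merged rather than lost."""
    n, _ = tokens.shape
    k = max(1, int(n * keep_ratio))
    order = np.argsort(-attn)                 # most important first
    keep_idx, drop_idx = order[:k], order[k:]

    kept = tokens[keep_idx].copy()
    weights = attn[keep_idx].copy()           # running fusion weights

    def unit(x):                              # normalize for cosine similarity
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    kept_dirs = unit(kept)
    for i in drop_idx:
        sims = kept_dirs @ unit(tokens[i][None, :]).T  # similarity to kept tokens
        j = int(np.argmax(sims))
        w_new = weights[j] + attn[i]
        # attention-weighted average fusion into the most similar kept token
        kept[j] = (kept[j] * weights[j] + tokens[i] * attn[i]) / (w_new + 1e-8)
        weights[j] = w_new
    return kept

# Example: compress 16 random "tokens" down to ~30% of the sequence
rng = np.random.default_rng(0)
toks = rng.normal(size=(16, 8))
scores = rng.uniform(size=16)
out = fuse_tokens(toks, scores, keep_ratio=0.3)
print(out.shape)  # (4, 8)
```

Because dropped tokens are averaged into survivors rather than discarded, a sketch like this preserves more aggregate information than plain top-k pruning at the same compression rate.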

📝 Abstract
Recently, Multi-modal Large Language Models (MLLMs) have shown remarkable effectiveness for multi-modal tasks due to their abilities to generate and understand cross-modal data. However, processing long sequences of visual tokens extracted from visual backbones poses a challenge for deployment in real-time applications. To address this issue, we introduce FOLDER, a simple yet effective plug-and-play module designed to reduce the length of the visual token sequence, mitigating both computational and memory demands during training and inference. Through a comprehensive analysis of the token reduction process, we analyze the information loss introduced by different reduction strategies and develop FOLDER to preserve key information while removing visual redundancy. We showcase the effectiveness of FOLDER by integrating it into the visual backbone of several MLLMs, significantly accelerating the inference phase. Furthermore, we evaluate its utility as a training accelerator or even performance booster for MLLMs. In both contexts, FOLDER achieves comparable or even better performance than the original models, while dramatically reducing complexity by removing up to 70% of visual tokens.
Problem

Research questions and friction points this paper is trying to address.

Reducing visual token sequence length for MLLMs
Mitigating computational and memory demands in deployment
Preserving key information while removing visual redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-and-play module reducing visual token length
Preserves key information while removing visual redundancy
Accelerates inference and training with token reduction
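The "plug-and-play" claim in the bullets above amounts to inserting a token reducer between the vision backbone and the language model, leaving both untouched. The wrapper below is a hypothetical sketch of that wiring (all names are illustrative; the subsampling reducer stands in for FOLDER, which fuses tokens rather than subsampling them).

```python
# Hypothetical sketch of plugging a token reducer after a vision backbone.
# Names and shapes are illustrative, not the paper's API.
from typing import Callable
import numpy as np

class ReducedBackbone:
    """Wraps any backbone (image -> (N, D) tokens) with a reducer
    ((N, D) -> (M, D) tokens); the downstream LLM is unchanged."""
    def __init__(self, backbone: Callable, reducer: Callable):
        self.backbone = backbone
        self.reducer = reducer

    def __call__(self, image):
        return self.reducer(self.backbone(image))

# Toy backbone emitting 576 tokens (a ViT-style 24x24 patch grid)
backbone = lambda img: np.random.default_rng(1).normal(size=(576, 32))
# Toy reducer keeping every 4th token (~75% reduction); FOLDER itself
# merges tokens instead of subsampling, but the plumbing is the same.
reducer = lambda t: t[::4]

model = ReducedBackbone(backbone, reducer)
print(model(None).shape)  # (144, 32)
```

Because the reduction happens before tokens reach the language model, the quadratic attention cost in the LLM shrinks with the square of the compression ratio, which is where the reported inference speedups come from.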
👥 Authors
Haicheng Wang
SJTU Paris Elite Institute of Technology, Shanghai Jiao Tong University, China; LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Zhemeng Yu
Shanghai Jiao Tong University
Data Mining · Deep Learning
Gabriele Spadaro
PhD student, University of Turin, Télécom Paris
Deep Learning · Computer Vision
Victor Quétu
LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Enzo Tartaglione
Associate Professor, Télécom Paris, Institut Polytechnique de Paris
deep learning · compression · pruning · debiasing · frugal AI