Are We Using the Right Benchmark: An Evaluation Framework for Visual Token Compression Methods

📅 2025-10-08
🤖 AI Summary
Existing evaluation benchmarks for visual token compression in multimodal large language models (MLLMs) suffer from task mismatch: originally designed to assess perception and reasoning capabilities, they are ill-suited to measuring compression efficacy and can distort results. This work identifies pervasive noisy samples in these benchmarks and, crucially, is the first to show that image downsampling serves as an effective proxy for estimating sample difficulty and filtering data. Building on this insight, the authors propose VTC-Bench, a dedicated evaluation framework for visual token compression that improves fairness and accuracy by systematically denoising existing benchmarks. Extensive experiments across mainstream benchmarks and compression methods reveal substantial noise in current evaluations: simple downsampling even outperforms state-of-the-art compression techniques, whereas VTC-Bench significantly improves assessment reliability. Code and data are publicly available.

📝 Abstract
Recent endeavors to accelerate inference in Multimodal Large Language Models (MLLMs) have primarily focused on visual token compression. The effectiveness of these methods is typically assessed by measuring the accuracy drop on established benchmarks, comparing model performance before and after compression. However, these benchmarks are originally designed to assess the perception and reasoning capabilities of MLLMs, rather than to evaluate compression techniques. As a result, directly applying them to visual token compression introduces a task mismatch. Strikingly, our investigation reveals that simple image downsampling consistently outperforms many advanced compression methods across multiple widely used benchmarks. Through extensive experiments, we make the following observations: (i) Current benchmarks are noisy for the visual token compression task. (ii) Down-sampling is able to serve as a data filter to evaluate the difficulty of samples in the visual token compression task. Motivated by these findings, we introduce VTC-Bench, an evaluation framework that incorporates a data filtering mechanism to denoise existing benchmarks, thereby enabling fairer and more accurate assessment of visual token compression methods. All data and code are available at https://github.com/Chenfei-Liao/VTC-Bench.
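The abstract's downsampling baseline is easy to make concrete: in a ViT-style encoder with a fixed patch size, the number of visual tokens scales with image area, so halving each image dimension cuts the token count roughly fourfold. The sketch below illustrates this arithmetic; the 336-pixel input and 14-pixel patch size are assumptions for illustration, not details from the paper.

```python
def num_visual_tokens(width: int, height: int, patch: int = 14) -> int:
    """Token count for a patch-based vision encoder: one token per patch."""
    return (width // patch) * (height // patch)

# Full-resolution input vs. the same image downsampled by 2x per side.
full = num_visual_tokens(336, 336)  # 24 x 24 patches = 576 tokens
down = num_visual_tokens(168, 168)  # 12 x 12 patches = 144 tokens
```

Because a 2x-per-side downsample already yields a 4x token reduction with no learned components, it is a natural reference point against which dedicated compression methods should be compared.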
Problem

Research questions and friction points this paper is trying to address.

Evaluating visual token compression methods using inappropriate benchmarks
Addressing task mismatch in multimodal model compression assessment
Developing framework for fair comparison of compression techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes VTC-Bench evaluation framework for compression methods
Introduces data filtering mechanism to denoise benchmarks
Uses image downsampling as baseline for performance assessment
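The filtering mechanism described above can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the field names and the exact keep/drop rule are hypothetical. The idea is that a sample only tests compression quality when the model answers it correctly at full resolution but fails once naive downsampling discards fine detail; samples that survive downsampling (too easy) or fail even at full resolution (too hard) add noise.

```python
def filter_benchmark(samples):
    """Keep samples that are informative for compression evaluation.

    Each sample carries precomputed booleans (hypothetical schema):
      'correct_full' - model answers correctly at full resolution
      'correct_down' - model answers correctly after downsampling
    A sample is kept only when success depends on fine visual detail,
    i.e. the model succeeds at full resolution but fails when downsampled.
    """
    return [s for s in samples
            if s["correct_full"] and not s["correct_down"]]

samples = [
    {"id": 1, "correct_full": True,  "correct_down": True},   # too easy: noise
    {"id": 2, "correct_full": True,  "correct_down": False},  # informative
    {"id": 3, "correct_full": False, "correct_down": False},  # too hard: noise
]
kept = filter_benchmark(samples)  # only sample 2 remains
```

Evaluating compression methods on the filtered subset avoids rewarding them for samples that never needed high-fidelity visual tokens in the first place.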