AI Summary
Existing evaluation methods for vision-text compression rely heavily on downstream task performance, which often fails to accurately reflect textual fidelity. To address this limitation, this work proposes a decoupled evaluation framework that isolates the inherent capabilities of multimodal large language models from the compression being measured, and introduces ZeroSense, a novel benchmark that eliminates contextual dependencies by using samples with low semantic relevance. This yields a clean measurement of compression quality, independent of task-specific biases. Extensive experiments across multiple datasets reveal a significant discrepancy between compression quality and downstream task accuracy, demonstrating the necessity and effectiveness of the proposed framework for objectively assessing compression fidelity.
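For intuition, here is a minimal sketch of how low-semantic-relevance samples of the kind ZeroSense targets might be constructed. The function name `make_zero_context_sample`, the toy vocabulary, and the uniform-sampling scheme are illustrative assumptions, not the benchmark's actual construction procedure.

```python
# Sketch only: one plausible way to build "low semantic relevance" test
# samples. Not the actual ZeroSense construction procedure.
import random

def make_zero_context_sample(vocab: list[str], length: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    # Uniform sampling destroys n-gram statistics, so a strong language
    # prior gives a model no advantage when reconstructing the text.
    return " ".join(rng.choice(vocab) for _ in range(length))

# Example: a 12-word sample drawn from a small hypothetical vocabulary.
vocab = ["anchor", "lattice", "quorum", "basalt", "cipher", "meridian",
         "tundra", "pivot", "garnet", "helix", "osmium", "zephyr"]
print(make_zero_context_sample(vocab, 12))
```

Because consecutive words carry no mutual information, any token the compressor drops cannot be recovered from context, so reconstruction errors expose compression loss directly.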
Abstract
Recent visual-text compression (VTC) methods, typified by DeepSeek-OCR, report impressively high token compression ratios for long-context modeling tasks by leveraging text-to-image rendering. However, existing evaluation protocols rely heavily on downstream task performance. Such evaluation metrics fail to accurately measure text preservation because of the strong linguistic priors inherent in Multimodal Large Language Models (MLLMs). In this work, we introduce a new evaluation framework that decouples MLLMs' inherent capabilities from the compression itself, allowing VTC quality to be assessed faithfully. Within this framework, we further introduce the ZeroSense Benchmark to ensure low semantic correlation among test samples. By eliminating contextual dependencies, our benchmark ensures that evaluation results purely reflect VTC quality, unaffected by the semantic inference capabilities of downstream models. Extensive experiments across multiple datasets demonstrate that VTC quality and downstream task accuracy diverge significantly, highlighting the necessity of our decoupled evaluation framework.
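To make the decoupling concrete, the sketch below scores a VTC pipeline by direct text reconstruction rather than downstream accuracy, so an MLLM's linguistic prior cannot paper over lost content. The normalized edit-distance fidelity metric is one plausible choice assumed here for illustration; the paper's actual metric and pipeline interfaces may differ.

```python
# Sketch of a decoupled fidelity measurement, assuming the reconstructed
# text has already been decoded from the compressed visual tokens.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def compression_fidelity(original: str, reconstructed: str) -> float:
    """Normalized score in [0, 1]: 1.0 is perfect text preservation."""
    if not original and not reconstructed:
        return 1.0
    return 1.0 - edit_distance(original, reconstructed) / max(len(original), len(reconstructed))
```

Comparing this fidelity score against downstream task accuracy on the same samples is what surfaces the divergence the abstract describes: a model can answer questions correctly from its priors even when much of the rendered text was lost.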