ZeroSense: How Vision matters in Long Context Compression

πŸ“… 2026-03-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing evaluation methods for vision–text compression rely heavily on downstream task performance, which often fails to reflect textual fidelity accurately. To address this limitation, this work proposes a decoupled evaluation framework that isolates the capabilities of multimodal large language models and introduces ZeroSense, a benchmark designed to eliminate contextual dependencies by using samples with low semantic relevance. This enables a clean measurement of compression quality, independent of task-specific biases. Extensive experiments across multiple datasets reveal a significant discrepancy between compression quality and downstream task accuracy, demonstrating the necessity and effectiveness of the proposed framework for objectively assessing compression fidelity.

πŸ“ Abstract
Recent visual-text compression (VTC) methods, typified by DeepSeek-OCR, report impressively high token compression ratios for long-context modeling tasks by leveraging text-to-image rendering. However, existing evaluation protocols rely heavily on downstream task performance. Such metrics fail to accurately measure text preservation because of the strong inherent linguistic priors of Multimodal Large Language Models (MLLMs). In this work, we introduce a new evaluation framework that decouples MLLMs' capabilities to faithfully assess VTC quality. Within this framework, we further introduce the ZeroSense Benchmark to ensure low semantic correlation among test samples. By eliminating contextual dependencies, our benchmark guarantees that the evaluation results purely reflect VTC quality, unaffected by the semantic inference capabilities of downstream models. Extensive experiments across multiple datasets demonstrate that VTC quality and downstream task accuracy diverge significantly, highlighting the necessity of our decoupled evaluation framework.
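The abstract's core idea, scoring how faithfully text survives compression independently of downstream reasoning, can be sketched as a character-level fidelity metric computed over low-semantic-correlation samples. The function names and the random-letter sampling below are illustrative assumptions, not the paper's actual protocol: random strings stand in for ZeroSense-style samples that a model cannot repair from linguistic priors.

```python
import random
import string

def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def fidelity(original: str, reconstructed: str) -> float:
    # Normalized similarity in [0, 1]; 1.0 means perfect text preservation.
    if not original and not reconstructed:
        return 1.0
    return 1.0 - levenshtein(original, reconstructed) / max(len(original), len(reconstructed))

def low_semantic_sample(n_words: int, seed: int = 0) -> str:
    # Hypothetical sample generator: random letter strings carry no semantic
    # context, so any loss during compression cannot be masked by an MLLM's
    # linguistic priors when the text is transcribed back.
    rng = random.Random(seed)
    return " ".join(
        "".join(rng.choice(string.ascii_lowercase) for _ in range(5))
        for _ in range(n_words)
    )

# Usage: compare a sample against a (here simulated) lossy reconstruction.
sample = low_semantic_sample(10)
print(fidelity(sample, sample))         # identical text scores 1.0
print(fidelity("kitten", "sitting"))    # three edits over six characters
```

The point of the normalization by the longer string is that both dropped and hallucinated characters lower the score, so the metric reflects transcription fidelity alone rather than any downstream task ability.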
Problem

Research questions and friction points this paper is trying to address.

visual-text compression
evaluation framework
text preservation
multimodal large language models
token compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual-text compression
evaluation framework
ZeroSense Benchmark
multimodal LLMs
text preservation
πŸ‘₯ Authors
Yonghan Gao, Shenzhen University of Advanced Technology
Zehong Chen, Shenzhen University of Advanced Technology
Lijian Xu, Shenzhen University of Advanced Technology
Jingzhi Chen, Shenzhen University of Advanced Technology
Jingwei Guan, Shenzhen Technology University
Xingyu Zeng, Shenzhen University of Advanced Technology
Computer Vision · Deep Learning