🤖 AI Summary
Evaluating long-context language models is computationally expensive because benchmarks require processing lengthy inputs end to end.
Method: This paper introduces MiniLongBench, a lightweight benchmark derived from LongBench, featuring the first compression-and-pruning paradigm explicitly designed for long-text redundancy. It integrates empirically guided redundancy identification, task-aware sample pruning, cross-task balanced sampling, and multi-model consensus validation to distill 237 high-information-density instances.
Contribution/Results: MiniLongBench reduces evaluation cost to just 4.5% of LongBench's while maintaining a 0.97 average rank correlation, ensuring both efficiency and fidelity. Experiments across 60+ large language models demonstrate substantial reductions in inference latency and computational overhead, with model rankings remaining highly consistent with those on LongBench. The benchmark, including code, data, and tutorials, is publicly released.
📝 Abstract
Long Context Understanding (LCU) is a critical area for exploration in current large language models (LLMs). However, due to the inherently lengthy nature of long-text data, existing LCU benchmarks for LLMs often incur prohibitively high evaluation costs, such as testing time and inference expenses. Through extensive experimentation, we discover that existing LCU benchmarks exhibit significant redundancy, which leads to inefficiency in evaluation. In this paper, we propose a concise data compression method tailored for long-text data with sparse information characteristics. By pruning the well-known LCU benchmark LongBench, we create MiniLongBench. This benchmark includes only 237 test samples across six major task categories and 21 distinct tasks. Through empirical analysis of over 60 LLMs, MiniLongBench achieves an average evaluation cost of only 4.5% of the original while maintaining an average rank correlation coefficient of 0.97 with LongBench results. Therefore, our MiniLongBench, as a low-cost benchmark, holds great potential to substantially drive future research into the LCU capabilities of LLMs. See https://github.com/MilkThink-Lab/MiniLongBench for our code, data and tutorial.
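The fidelity claim rests on rank correlation: if models are ordered the same way by the pruned benchmark as by the full one, the cheap benchmark is a faithful proxy. A minimal sketch of such a check, using Spearman's rho under the no-ties formula; the model scores below are purely illustrative, not taken from the paper:

```python
# Sketch of a rank-correlation fidelity check between a full benchmark
# and a pruned subset. All scores here are hypothetical placeholders.

def ranks(scores):
    """1-based ranks, highest score gets rank 1 (assumes no ties)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman's rho via the no-ties formula 1 - 6*sum(d^2)/(n(n^2-1))."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical average scores for five models on the full benchmark
# and on a pruned subset of it.
full_scores = [72.1, 65.3, 58.9, 50.2, 44.7]
mini_scores = [70.5, 60.1, 61.0, 48.3, 45.0]

rho = spearman(full_scores, mini_scores)
print(round(rho, 2))  # prints 0.9; a value near 1 means rankings are preserved
```

A value close to 1 (the paper reports 0.97 on average) indicates that swapping in the pruned benchmark barely changes how models are ordered.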