Abstract
Long-context processing has become a fundamental capability for large language models (LLMs). To assess models' long-context performance, numerous long-context evaluation benchmarks have been proposed. However, variations in evaluation settings across these benchmarks lead to inconsistent results, making it difficult to draw reliable comparisons. In addition, the high computational cost of long-context evaluation poses a significant barrier to comprehensive community assessments of long-context models. In this paper, we propose LOOM-Scope, a comprehensive and efficient framework for long-context evaluation. LOOM-Scope standardizes evaluation settings across diverse benchmarks, supports the deployment of efficient long-context inference acceleration methods, and introduces a holistic yet lightweight benchmark suite for evaluating models comprehensively. Homepage: https://loomscope.github.io