LOOM-Scope: a comprehensive and efficient LOng-cOntext Model evaluation framework

📅 2025-07-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Long-context model evaluation faces two key bottlenecks: fragmented benchmarking protocols hinder cross-study comparability, and high computational costs impede large-scale assessment. To address both, we propose LOOM-Scope, a lightweight, efficient framework enabling unified multi-benchmark evaluation. The framework introduces (1) a standardized evaluation protocol that harmonizes task partitioning, input construction, and metric computation across major long-context benchmarks (e.g., LongBench, LEMB); (2) integrated KV-cache compression and chunked inference acceleration, reducing GPU memory usage and latency by over 60%; and (3) a modular, extensible benchmark suite that balances comprehensiveness with low overhead. Experiments demonstrate that LOOM-Scope significantly improves evaluation consistency, reproducibility, and community accessibility. The framework is open-sourced and has been adopted by multiple model evaluation initiatives.
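To make the standardized-protocol idea concrete, here is a minimal sketch of what a unified multi-benchmark evaluation loop could look like: each benchmark sits behind an adapter that pins down task partitioning, input construction, and metric computation, so every model is scored through one shared code path. All names in the sketch (`Sample`, `BenchmarkAdapter`, `evaluate`) are hypothetical illustrations, not LOOM-Scope's actual API.

```python
# A minimal sketch of a standardized multi-benchmark evaluation loop.
# All names here are hypothetical illustrations, not LOOM-Scope's API.
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Iterable

@dataclass
class Sample:
    prompt: str      # fully constructed model input
    reference: str   # gold answer consumed by the metric

@dataclass
class BenchmarkAdapter:
    name: str
    load: Callable[[], Iterable[Sample]]  # fixes task partitioning + input construction
    metric: Callable[[str, str], float]   # (prediction, reference) -> score

def evaluate(model: Callable[[str], str],
             benchmarks: list[BenchmarkAdapter]) -> dict[str, float]:
    """Run every benchmark under the same protocol and report mean scores."""
    return {
        bench.name: mean(bench.metric(model(s.prompt), s.reference)
                         for s in bench.load())
        for bench in benchmarks
    }
```

A LongBench-style adapter would then only supply its own `load` and `metric` functions; the scoring loop itself never changes, which is what makes cross-benchmark results comparable.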

πŸ“ Abstract
Long-context processing has become a fundamental capability for large language models (LLMs). To assess models' long-context performance, numerous long-context evaluation benchmarks have been proposed. However, variations in evaluation settings across these benchmarks lead to inconsistent results, making it difficult to draw reliable comparisons. Moreover, the high computational cost of long-context evaluation poses a significant barrier for the community to conduct comprehensive assessments of long-context models. In this paper, we propose LOOM-Scope, a comprehensive and efficient framework for long-context evaluation. LOOM-Scope standardizes evaluation settings across diverse benchmarks, supports the deployment of efficient long-context inference acceleration methods, and introduces a holistic yet lightweight benchmark suite to evaluate models comprehensively. Homepage: https://loomscope.github.io
Problem

Research questions and friction points this paper is trying to address.

Inconsistent evaluation settings across benchmarks make cross-study comparisons unreliable
High computational costs put comprehensive long-context assessment out of reach for much of the community
No existing suite balances holistic coverage with lightweight, low-overhead evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardizes evaluation settings across diverse benchmarks
Supports efficient long-context inference acceleration methods (see the chunked-prefill sketch after this list)
Introduces a holistic yet lightweight benchmark suite
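One common acceleration of this kind is chunked prefill: the long prompt is pushed through the model in fixed-size chunks while the KV cache is carried forward, so peak activation memory tracks the chunk size rather than the full prompt length. Below is a minimal sketch assuming a Hugging Face-style causal LM; the checkpoint name and chunk size are placeholders, and this illustrates the general technique rather than LOOM-Scope's implementation.

```python
# Minimal chunked-prefill sketch (general technique, not LOOM-Scope's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

@torch.no_grad()
def chunked_prefill(prompt: str, chunk_size: int = 4096):
    """Prefill the KV cache chunk by chunk instead of one giant forward pass."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    past, logits = None, None
    for start in range(0, ids.size(1), chunk_size):
        out = model(ids[:, start:start + chunk_size],
                    past_key_values=past, use_cache=True)
        past, logits = out.past_key_values, out.logits  # carry cache across chunks
    # Return the cache plus last-token logits, ready for token-by-token decoding.
    return past, logits[:, -1]
```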
Zecheng Tang
Soochow University, China; Key Laboratory of Data Intelligence and Advanced Computing, Soochow University
Haitian Wang
University of Western Australia
3D point cloud · Computer vision · Machine learning · IoT · Remote sensing
Quantong Qiu
Soochow University
LLM · Sparse Attention · KV Cache
Baibei Ji
Soochow University, China; Key Laboratory of Data Intelligence and Advanced Computing, Soochow University
Ruoxi Sun
Soochow University, China; Key Laboratory of Data Intelligence and Advanced Computing, Soochow University
Keyan Zhou
Soochow University, China; Key Laboratory of Data Intelligence and Advanced Computing, Soochow University
Juntao Li
Soochow University
Language Models · Text Generation
Min Zhang
Soochow University, China; Key Laboratory of Data Intelligence and Advanced Computing, Soochow University