LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks

📅 2024-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing long-context evaluation benchmarks lack rigorous assessment of deep understanding and reasoning across realistic multitasks. Method: LongBench v2 comprises 503 challenging multiple-choice questions with context lengths from 8k to 2M words, spanning six task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. Data were collected from nearly 100 highly educated contributors with diverse professional backgrounds, and both automated and manual review were used to maintain quality and difficulty. Contribution/Results: Human experts achieve only 53.7% accuracy under a 15-minute time constraint; the best-performing model reaches 50.1% when answering directly, while o1-preview, which performs longer reasoning, achieves 57.7%, surpassing the human baseline by 4%. These results indicate that enhanced reasoning and inference-time compute scaling are critical for deep long-context understanding.

📝 Abstract
This paper introduces LongBench v2, a benchmark designed to assess the ability of LLMs to handle long-context problems requiring deep understanding and reasoning across real-world multitasks. LongBench v2 consists of 503 challenging multiple-choice questions, with contexts ranging from 8k to 2M words, across six major task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. To ensure breadth and practicality, we collect data from nearly 100 highly educated individuals with diverse professional backgrounds. We employ both automated and manual review processes to maintain high quality and difficulty, resulting in human experts achieving only 53.7% accuracy under a 15-minute time constraint. Our evaluation reveals that the best-performing model, when answering the questions directly, achieves only 50.1% accuracy. In contrast, the o1-preview model, which includes longer reasoning, achieves 57.7%, surpassing the human baseline by 4%. These results highlight the importance of enhanced reasoning ability and scaling inference-time compute to tackle the long-context challenges in LongBench v2. The project is available at https://longbench2.github.io.
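The accuracy figures above come from scoring models on multiple-choice questions over long contexts. As a minimal sketch of that evaluation loop, the snippet below computes accuracy on LongBench v2-style items; the field names (`context`, `question`, `choices`, `answer`) and the `ask_model` callable are illustrative assumptions, not the benchmark's official API.

```python
def accuracy(examples, ask_model):
    """Fraction of questions where the model's letter matches the gold answer.

    `examples` is a list of dicts with illustrative fields:
      context (str), question (str), choices (list of 4 str), answer ("A"-"D").
    `ask_model` is any callable mapping a prompt string to a model reply.
    """
    correct = 0
    for ex in examples:
        # Build a single prompt: long context, question, lettered options.
        options = "\n".join(
            f"({label}) {text}" for label, text in zip("ABCD", ex["choices"])
        )
        prompt = (
            f"{ex['context']}\n\nQuestion: {ex['question']}\n{options}\n"
            "Answer with a single letter."
        )
        # Take the first character of the reply as the predicted letter.
        pred = ask_model(prompt).strip().upper()[:1]
        correct += pred == ex["answer"]
    return correct / len(examples)
```

A direct-answer baseline would pass a plain completion call as `ask_model`, while a long-reasoning setup (as with o1-preview) would let the model produce its chain of thought first and extract the final letter from the reply.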
Problem

Research questions and friction points this paper is trying to address.

LongBench v2
Long Text Processing
Complex Problem Solving
Innovation

Methods, ideas, or system contributions that make the work stand out.

LongBench v2
deep reasoning
long-form content understanding