LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation

📅 2025-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing long-context language models lack rigorous evaluation on complex procedural tasks (multi-step reasoning, dispersed information integration, and long structured generation) due to the absence of appropriate benchmarks. Method: We introduce LongProc, a verifiable benchmark designed for long-form procedural generation. It comprises six realistic procedural tasks (e.g., HTML-to-TSV extraction, travel planning), supports three output-length tiers (500/2K/8K tokens), and employs rule-based automated evaluation. Unlike conventional recall-oriented benchmarks, LongProc emphasizes dispersed information integration, deterministic structured output, and multi-step logical coherence. Contribution/Results: Evaluation of 17 state-of-the-art long-context models reveals critical limitations: open-weight models degrade significantly even on 2K-token tasks, while closed-source models like GPT-4o suffer a severe collapse of long-range coherence at 8K tokens. LongProc thus provides a reliable, task-grounded benchmark for assessing procedural generation capability in long-context LMs.

📝 Abstract
Existing benchmarks for evaluating long-context language models (LCLMs) primarily focus on long-context recall, requiring models to produce short responses based on a few critical snippets while processing thousands of irrelevant tokens. We introduce LongProc (Long Procedural Generation), a new benchmark that requires both the integration of highly dispersed information and long-form generation. LongProc consists of six diverse procedural generation tasks, such as extracting structured information from HTML pages into a TSV format and executing complex search procedures to create travel plans. These tasks challenge LCLMs by testing their ability to follow detailed procedural instructions, synthesize and reason over dispersed information, and generate structured, long-form outputs (up to 8K tokens). Furthermore, as these tasks adhere to deterministic procedures and yield structured outputs, they enable reliable rule-based evaluation. We evaluate 17 LCLMs on LongProc across three difficulty levels, with maximum numbers of output tokens set at 500, 2K, and 8K. Notably, while all tested models claim a context window size above 32K tokens, open-weight models typically falter on 2K-token tasks, and closed-source models like GPT-4o show significant degradation on 8K-token tasks. Further analysis reveals that LCLMs struggle to maintain long-range coherence in long-form generations. These findings highlight critical limitations in current LCLMs and suggest substantial room for improvement. Data and code available at: https://princeton-pli.github.io/LongProc
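Because LongProc's tasks follow deterministic procedures and produce structured outputs, they can be scored with simple rules rather than model-based judges. As a rough illustration of what rule-based evaluation of a structured task might look like, the sketch below scores an HTML-to-TSV style output by row-level exact match. The parsing and the metric here are assumptions for illustration, not the benchmark's actual evaluation code.

```python
def parse_tsv(text: str) -> list[tuple[str, ...]]:
    """Split raw model output into TSV rows (tuples of stripped cells)."""
    rows = []
    for line in text.strip().splitlines():
        if line.strip():
            rows.append(tuple(cell.strip() for cell in line.split("\t")))
    return rows


def row_accuracy(prediction: str, reference: str) -> float:
    """Fraction of reference rows that the prediction reproduces exactly.

    A hypothetical rule-based metric: each gold row must appear verbatim
    in the predicted output (order-insensitive here for simplicity).
    """
    pred_rows = parse_tsv(prediction)
    ref_rows = parse_tsv(reference)
    if not ref_rows:
        return 0.0
    matched = sum(1 for row in ref_rows if row in pred_rows)
    return matched / len(ref_rows)


gold = "Alice\t30\nBob\t25"
pred = "Alice\t30\nBob\t26"   # second row has a wrong cell
print(row_accuracy(pred, gold))  # → 0.5
```

Deterministic checks like this make scores reproducible and free of judge-model noise, which is what lets the benchmark reliably measure degradation as the required output length grows.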
Problem

Research questions and friction points this paper is trying to address.

Language Models
Coherent Text Generation
Complex Information Integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

LongProc
Long Text Generation
Model Coherence Evaluation