Too Long, Didn't Model: Decomposing LLM Long-Context Understanding With Novels

📅 2025-05-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing long-context evaluation relies heavily on人工ly constructed “needle-in-a-haystack” tasks, failing to capture realistic, complex semantic dependencies. Method: This work introduces TLDM, the first benchmark systematically employing full-length novels exceeding 128K tokens as test cases; it proposes a three-dimensional evaluation framework covering plot summarization, story-world modeling, and narrative temporality, grounded in computational literary analysis and implemented via a multi-task assessment protocol. Contribution/Results: We publicly release the dataset, reference implementations, and standardized evaluation pipelines. Experiments across seven state-of-the-art LLMs reveal a pronounced performance degradation beyond 64K tokens—not only confirming the “middle-loss” phenomenon but also uncovering deeper long-range comprehension failures: specifically, inadequate modeling of cross-chapter semantic coherence and nonlinear temporal structures.

Technology Category

Application Category

📝 Abstract
Although the context length of large language models (LLMs) has increased to millions of tokens, evaluating their effectiveness beyond needle-in-a-haystack approaches has proven difficult. We argue that novels provide a case study of subtle, complicated structure and long-range semantic dependencies often over 128k tokens in length. Inspired by work on computational novel analysis, we release the Too Long, Didn't Model (TLDM) benchmark, which tests a model's ability to report plot summary, storyworld configuration, and elapsed narrative time. We find that none of seven tested frontier LLMs retain stable understanding beyond 64k tokens. Our results suggest language model developers must look beyond"lost in the middle"benchmarks when evaluating model performance in complex long-context scenarios. To aid in further development we release the TLDM benchmark together with reference code and data.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' long-context understanding beyond needle-in-haystack approaches
Assessing LLMs' ability to handle novels' long-range semantic dependencies
Testing LLMs' retention of complex narrative structures beyond 64k tokens
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses novels to test long-context understanding
Introduces TLDM benchmark for complex evaluation
Reveals LLMs struggle beyond 64k tokens
🔎 Similar Papers
No similar papers found.