🤖 AI Summary
This work addresses a critical limitation of large language models (LLMs): accurately extracting and ordering multiple embedded key facts ("needles") within long contexts. To this end, we introduce the first long-context benchmark specifically designed for sequential information extraction, supporting context lengths from 8K to 128K tokens. Methodologically, we propose a synthetic evaluation framework with verifiable temporal and logical ordering constraints, incorporating three needle-generation paradigms (synthetic, real-world, and open-domain QA) and integrating multi-scale sampling and noise-robustness testing. Comprehensive experiments across six state-of-the-art LLMs reveal a dual degradation effect: performance declines significantly as both context length and needle count increase. The best-performing model achieves only 63.15% accuracy, whereas an evaluation model trained on synthetic data attains 99.49% on the synthetic test set, demonstrating the benchmark's high reliability and strong discriminative power.
📝 Abstract
Evaluating the ability of large language models (LLMs) to handle extended contexts is critical, particularly for retrieving information relevant to specific queries embedded within lengthy inputs. We introduce Sequential-NIAH, a benchmark specifically designed to evaluate the capability of LLMs to extract sequential information items (known as needles) from long contexts. The benchmark comprises three needle generation pipelines: synthetic, real, and open-domain QA. It includes contexts ranging from 8K to 128K tokens in length, with a dataset of 14,000 samples (2,000 reserved for testing). To facilitate evaluation on this benchmark, we trained a synthetic data-driven evaluation model capable of judging answer correctness based on chronological or logical order, achieving an accuracy of 99.49% on synthetic test data. We conducted experiments on six well-known LLMs, revealing that even the best-performing model achieved a maximum accuracy of only 63.15%. Further analysis highlights the growing challenges posed by increasing context lengths and numbers of needles, underscoring substantial room for improvement. Additionally, noise robustness experiments validate the reliability of the benchmark, making Sequential-NIAH an important reference for advancing research on the long-text extraction capabilities of LLMs.
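To make the task concrete, the sketch below illustrates (as a hypothetical toy version, not the paper's actual pipeline) how a sequential needle-in-a-haystack sample could be assembled, and how an answer can be checked for order-preserving correctness. All function names, the seeding scheme, and the subsequence-based correctness criterion are illustrative assumptions.

```python
import random


def build_sequential_niah_sample(needles, filler_sentences, seed=0):
    """Embed ordered 'needles' at random positions in filler text.

    Hypothetical illustration: insertion points are sampled so that the
    needles' original (e.g. chronological) order is preserved in the
    context. Returns the context, a question, and the expected answer.
    """
    rng = random.Random(seed)
    context = list(filler_sentences)
    # Sorted insertion points keep the needles in their given order.
    positions = sorted(rng.sample(range(len(context) + 1), len(needles)))
    for offset, (pos, needle) in enumerate(zip(positions, needles)):
        # Each earlier insertion shifts later positions by one.
        context.insert(pos + offset, needle)
    question = "List the key events in chronological order."
    return " ".join(context), question, list(needles)


def answer_in_correct_order(predicted, expected):
    """True iff every expected needle appears in predicted, in order
    (i.e. expected is a subsequence of predicted)."""
    it = iter(predicted)
    return all(any(item == p for p in it) for item in expected)
```

A usage example: with needles `["In 2001, X.", "In 2005, Y.", "In 2010, Z."]` and a handful of filler sentences, the builder yields a long context in which the three facts appear in chronological order, and `answer_in_correct_order` rejects any answer that lists them out of sequence or omits one.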