AI Summary
The scarcity of long-text training data hinders large language models' ability to model long-range dependencies. To address this, we propose NExtLong, a data synthesis framework based on Negative document Extension: it decomposes a document into meta-chunks and retrieves cross-document hard negatives, interleaving semantically related yet logically incoherent distractors into the original text to construct high-difficulty synthetic long-context sequences. Supervised long-range dependency discrimination training then strengthens the model's capacity to identify genuine long-range relational structure. To our knowledge, NExtLong is the first systematic framework to exploit hard negatives for synthetic long-text generation. Experiments on the HELMET and RULER benchmarks demonstrate that NExtLong significantly outperforms existing synthetic-data approaches and mainstream models trained exclusively on authentic long documents, substantially reducing reliance on non-synthetic long-document corpora.
Abstract
Large language models (LLMs) with extended context windows have made significant strides, yet their development remains a challenge due to the scarcity of long documents. Existing methods tend to synthesize long-context data but lack a clear mechanism to reinforce long-range dependency modeling. To address this limitation, we propose NExtLong, a novel framework for synthesizing long-context data through Negative document Extension. NExtLong decomposes a document into multiple meta-chunks and extends the context by interleaving hard negative distractors retrieved from pretraining corpora. This approach compels the model to discriminate long-range dependent context from distracting content, enhancing its ability to model long-range dependencies. Extensive experiments demonstrate that NExtLong achieves significant performance improvements on the HELMET and RULER benchmarks compared to existing long-context synthesis approaches and leading models trained on non-synthetic long documents. These findings highlight NExtLong's ability to reduce reliance on non-synthetic long documents, making it an effective framework for developing advanced long-context LLMs.
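The synthesis pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the fixed-size chunking policy, and the toy token-overlap similarity (standing in for the paper's retriever over pretraining corpora) are all assumptions made for the sake of a self-contained example.

```python
def split_into_meta_chunks(document, chunk_size=3):
    """Split a document (a list of sentences) into fixed-size meta-chunks.
    Fixed-size chunking is an assumption; the paper's chunking policy may differ."""
    return [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]

def token_overlap(chunk_a, chunk_b):
    """Toy Jaccard similarity over tokens, standing in for a real retriever
    (e.g. dense embedding search) over a pretraining corpus."""
    ta = set(" ".join(chunk_a).split())
    tb = set(" ".join(chunk_b).split())
    return len(ta & tb) / max(1, len(ta | tb))

def retrieve_hard_negatives(chunk, corpus_chunks, k=2):
    """Return the k corpus chunks most similar to `chunk`: semantically related
    but drawn from unrelated documents, so they act as hard distractors."""
    ranked = sorted(corpus_chunks,
                    key=lambda c: token_overlap(chunk, c), reverse=True)
    return ranked[:k]

def synthesize_long_context(document, corpus_chunks, chunk_size=3, k=2):
    """Interleave each meta-chunk with its hard negatives, yielding a long
    sequence in which the document's true long-range dependencies are
    separated by distracting content the model must learn to ignore."""
    extended = []
    for chunk in split_into_meta_chunks(document, chunk_size):
        extended.extend(chunk)                       # keep original chunk order
        for neg in retrieve_hard_negatives(chunk, corpus_chunks, k):
            extended.extend(neg)                     # insert distractors after it
    return extended
```

Because the original meta-chunks keep their relative order while distractors are spliced between them, training on such sequences forces the model to recover dependencies that now span much longer distances.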