A Comprehensive Survey on Long Context Language Modeling

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Long-context language models (LCLMs) face persistent bottlenecks in efficiency and interpretability across modeling, training, deployment, and evaluation. Method: This work introduces the first end-to-end paradigm for LCLMs spanning model construction, training and deployment, and evaluation and analysis. It systematically integrates sparse attention, chunking- and memory-augmented architectures, sequence compression, efficient fine-tuning, and distributed inference, while establishing a unified evaluation framework and a mechanistic interpretability analysis pipeline. Contribution/Results: The authors publish the first comprehensive survey of the field; open-source the GitHub repository *LCLM-Horizon*, which continuously integrates over 100 state-of-the-art works; and advance benchmark development (e.g., long-document QA and summarization), attribution analysis techniques, and community standardization, laying the foundation for an open, collaborative LCLM research ecosystem.
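As a concrete illustration of the first technique family the summary names, here is a minimal NumPy sketch of sliding-window (local) sparse attention, in which each query attends only to a fixed window of recent keys so cost grows linearly rather than quadratically in sequence length. This is a toy single-head version written for this page, not the formulation of any specific method from the survey; all function and variable names are our own.

```python
import numpy as np

def sliding_window_attention(q, k, v, window: int):
    """Single-head sliding-window (local) attention.

    Each query position i attends only to the `window` most recent
    key positions, so compute and memory grow linearly with sequence
    length instead of quadratically as in full attention.
    """
    seq_len, d = q.shape
    out = np.zeros_like(v)
    for i in range(seq_len):
        start = max(0, i - window + 1)           # left edge of the local window
        scores = q[i] @ k[start : i + 1].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[start : i + 1]
    return out

# Toy usage: 16 tokens, 8-dim head, window of 4.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
print(sliding_window_attention(q, k, v, window=4).shape)  # (16, 8)
```

Production kernels implement the same access pattern with fused GPU operations and often mix local windows with a few global tokens; the survey catalogs such variants alongside the other architectural approaches listed above.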

📝 Abstract
Efficient processing of long contexts has been a persistent pursuit in Natural Language Processing. With the growing number of long documents, dialogues, and other textual data, it is important to develop Long Context Language Models (LCLMs) that can process and analyze extensive inputs in an effective and efficient way. In this paper, we present a comprehensive survey on recent advances in long-context modeling for large language models. Our survey is structured around three key aspects: how to obtain effective and efficient LCLMs, how to train and deploy LCLMs efficiently, and how to evaluate and analyze LCLMs comprehensively. For the first aspect, we discuss data strategies, architectural designs, and workflow approaches oriented toward long-context processing. For the second aspect, we provide a detailed examination of the infrastructure required for LCLM training and inference. For the third aspect, we present evaluation paradigms for long-context comprehension and long-form generation, as well as behavioral analysis and mechanistic interpretability of LCLMs. Beyond these three key aspects, we thoroughly explore the diverse application scenarios where existing LCLMs have been deployed and outline promising future development directions. This survey provides an up-to-date review of the literature on long-context LLMs, which we hope will serve as a valuable resource for both researchers and engineers. An associated GitHub repository collecting the latest papers and repos is available at: https://github.com/LCLM-Horizon/A-Comprehensive-Survey-For-Long-Context-Language-Modeling (LCLM-Horizon).
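One way to see why the training-and-inference infrastructure aspect matters: at long context lengths, the transformer KV cache, which stores per-layer keys and values for every past token, comes to dominate inference memory. Below is a back-of-the-envelope helper using hypothetical model dimensions of our own choosing, not figures from the paper.

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, batch: int = 1, bytes_per_param: int = 2) -> int:
    """Memory for a decoder's KV cache. The factor of 2 accounts for
    keys and values; bytes_per_param=2 assumes fp16/bf16 storage."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_param

# Illustrative numbers for a hypothetical 7B-class model
# (32 layers, 32 KV heads of dim 128) at a 128k-token context:
gib = kv_cache_bytes(32, 32, 128, 128_000) / 2**30
print(f"{gib:.1f} GiB")  # 62.5 GiB for a single sequence
```

Numbers like these are what motivate the sequence-compression, KV-cache management, and distributed-inference techniques the survey reviews.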
Problem

Research questions and friction points this paper is trying to address.

Efficient processing of long textual contexts
Training and deploying long context language models
Comprehensive evaluation of long-context model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data strategies for long context processing
Architectural designs for efficient LCLMs
Evaluation paradigms for long-context comprehension (a toy retrieval probe is sketched below)
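To make the evaluation bullet concrete, here is a minimal sketch of a "needle in a haystack" probe, a widely used long-context retrieval test of the kind such evaluation paradigms include. The `generate` callable stands in for any LCLM, and the needle text, filler, and depths are illustrative placeholders, not artifacts from the paper or the LCLM-Horizon repository.

```python
def build_haystack(needle: str, filler: str, n_filler: int, depth: float) -> str:
    """Bury `needle` at relative `depth` (0.0 = start, 1.0 = end)
    inside `n_filler` repetitions of `filler`."""
    chunks = [filler] * n_filler
    chunks.insert(int(n_filler * depth), needle)
    return "\n".join(chunks)

def needle_recall(generate, depths=(0.0, 0.25, 0.5, 0.75, 1.0)) -> float:
    """Fraction of insertion depths at which the model surfaces the needle."""
    needle = "The secret code is 4127."
    question = "What is the secret code?"
    filler = "Grass is green and the sky is blue."
    hits = 0
    for depth in depths:
        context = build_haystack(needle, filler, n_filler=2000, depth=depth)
        hits += "4127" in generate(f"{context}\n\nQuestion: {question}")
    return hits / len(depths)

# Sanity check with a dummy "model" that reads the prompt perfectly:
print(needle_recall(lambda prompt: "4127" if "4127" in prompt else "unknown"))  # 1.0
```

Real benchmarks additionally vary total context length and aggregate over many needles, and long-form generation, which the survey also covers, requires different metrics entirely.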
Authors

Jiaheng Liu
Dawei Zhu
Zhiqi Bai
Yancheng He (Alibaba Group) · LLM
Huanxuan Liao (Institute of Automation, Chinese Academy of Sciences) · Natural Language Processing, Large Language Model, Long Context Modeling
Haoran Que (Beihang University)
Zekun Wang
Chenchen Zhang
Ge Zhang
Jiebin Zhang
Yuanxing Zhang (Kuaishou Technology) · Recommender System, Large Language Model, Video Understanding
Zhuo Chen
Hangyu Guo
Shilong Li (University of California, Irvine) · Software Engineering, Autonomous Driving Systems
Ziqiang Liu (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences) · Natural Language Processing, Large Language Model
Yong Shan
Yifan Song
Jiayi Tian (University of California, Santa Barbara) · LLM Efficiency
Wenhao Wu
Zhejian Zhou (University of Southern California) · Natural Language Processing
Ruijie Zhu (University of Science and Technology of China) · 3D Vision
Junlan Feng (Chief Scientist at China Mobile Research) · Natural Language, Machine Learning, Speech Processing, Data Mining
Yang Gao
Shizhu He
Zhoujun Li (Beihang University) · Artificial Intelligence, Natural Language Processing, Network Security
Tianyu Liu
Fanyu Meng
Wenbo Su
Yingshui Tan
Zili Wang (StepFun LLM Researcher & M-A-P) · Large Language Models, Code Intelligence
Jian Yang
Wei Ye
Bo Zheng
Wangchunshu Zhou (OPPO & M-A-P) · Artificial General Intelligence, Language Agents, Large Language Models, Natural Language Processing
Wenhao Huang
Sujian Li
Zhaoxiang Zhang (Institute of Automation, Chinese Academy of Sciences) · Computer Vision, Pattern Recognition, Biologically-inspired Learning