LongEval: A Comprehensive Analysis of Long-Text Generation Through a Plan-based Paradigm

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the critical problem of performance degradation in large language models (LLMs) during long-text generation—particularly in length compliance and information density—as output length increases. To systematically investigate this issue, the authors introduce LongEval, the first dual-paradigm benchmark evaluating both direct generation and plan-driven approaches to long-text synthesis, uncovering a universal length-induced performance-decay pattern. Methodologically, they propose a cognition-inspired, plan-driven evaluation paradigm integrating task decomposition, stepwise planning, information-density quantification, and multi-dimensional automated assessment. Empirical results show that smaller models specially trained for long-text generation can match or surpass larger models, challenging the "bigger is better" consensus, and that plan-guided generation improves coherence and factual consistency by up to 17.2%. The LongEval dataset, evaluation code, and the baseline model LongWriter are open-sourced.
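The plan-driven paradigm described above (decompose the task, plan stepwise, then generate and score information density) can be sketched as a minimal pipeline. This is an illustrative assumption of how such a loop might look, not the paper's actual API; all function names and the density metric here are hypothetical stand-ins.

```python
# Hypothetical sketch of a plan-then-write evaluation loop.
# make_plan, write_section, and information_density are illustrative
# stand-ins, not LongEval's real interface.

def make_plan(task: str, n_sections: int) -> list[str]:
    """Decompose a writing task into an ordered section plan (stub)."""
    return [f"{task} - section {i + 1}" for i in range(n_sections)]

def write_section(outline_item: str) -> str:
    """Stand-in for an LLM call that expands one outline item."""
    return f"Text expanding on: {outline_item}."

def information_density(text: str, key_points: list[str]) -> float:
    """Fraction of required key points actually covered by the text."""
    covered = sum(1 for p in key_points if p.lower() in text.lower())
    return covered / len(key_points) if key_points else 0.0

def plan_based_generate(task: str, n_sections: int) -> str:
    """Generate section by section, following the plan instead of
    producing the full text in one direct pass."""
    plan = make_plan(task, n_sections)
    return "\n".join(write_section(item) for item in plan)

draft = plan_based_generate("History of telescopes", 3)
density = information_density(draft, ["section 1", "section 2", "section 3"])
```

The contrast with direct generation is that each section is produced against an explicit outline item, so coverage can be scored per plan step rather than over one monolithic output.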

📝 Abstract
Large Language Models (LLMs) have achieved remarkable success in various natural language processing tasks, yet their ability to generate long-form content remains poorly understood and evaluated. Our analysis reveals that current LLMs struggle with length requirements and information density in long-text generation, with performance deteriorating as text length increases. To quantitatively locate this performance degradation and provide further insights for model development, we present LongEval, a benchmark that evaluates long-text generation through both direct and plan-based generation paradigms, inspired by cognitive and linguistic writing models. The comprehensive experiments in this work reveal notable findings, such as that while model size correlates with generation ability, a small-scale model well trained on long texts (e.g., LongWriter) can achieve comparable performance. All code and datasets are released at https://github.com/Wusiwei0410/LongEval.
Problem

Research questions and friction points this paper is trying to address.

Evaluate long-text generation in LLMs
Assess performance degradation with text length
Compare direct and plan-based generation paradigms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plan-based generation paradigm
LongEval benchmark framework
Small-scale model LongWriter