A Cognitive Writing Perspective for Constrained Long-Form Text Generation

πŸ“… 2025-02-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the challenge of generating high-quality, strongly constrained long-form text in a single pass with large language models (LLMs), this paper proposes CogWriter, a training-free framework inspired by cognitive writing theory. Methodologically, it establishes a human-like writing loop of hierarchical planning, parallel generation, dynamic reviewing, and iterative revision: a Planning Agent decomposes the task into multi-granularity subtasks; multiple Generation Agents produce content in parallel; and real-time monitoring, requirement-alignment evaluation, and automated revision maintain fidelity and coherence. Its key contribution is the first systematic integration of cognitive writing theory into LLM-based long-text generation, yielding a modular, training-free, and scalable writing paradigm that overcomes the single-pass decoding bottleneck. Evaluated on LongGenBench with Qwen-2.5-14B as the backbone, CogWriter achieves complex instruction completion accuracy 22% higher than GPT-4o and reliably generates high-quality, constraint-compliant texts exceeding 10,000 words.
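
The pipeline described above amounts to a plan-then-execute loop: a Planning Agent produces section-level subtasks, and Generation Agents draft those sections concurrently. The sketch below is a minimal illustration of that orchestration, not the paper's released implementation; the call_llm wrapper, prompt wording, and fixed section count are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around any chat-completion API; stand-in only."""
    raise NotImplementedError


def plan(task: str, n_sections: int) -> list[str]:
    # Planning Agent: decompose the writing task into per-section subtasks.
    outline = call_llm(
        f"Decompose the following writing task into {n_sections} "
        f"section-level subtasks, one per line:\n{task}"
    )
    return [line.strip() for line in outline.splitlines() if line.strip()]


def generate(subtask: str, task: str) -> str:
    # Generation Agent: draft one section according to its subtask.
    return call_llm(f"Global task:\n{task}\n\nWrite the section for:\n{subtask}")


def cogwriter_like_pipeline(task: str, n_sections: int = 10) -> str:
    subtasks = plan(task, n_sections)
    # Sections are drafted in parallel rather than in one long decoding pass.
    with ThreadPoolExecutor() as pool:
        sections = list(pool.map(lambda s: generate(s, task), subtasks))
    return "\n\n".join(sections)
```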

πŸ“ Abstract
Like humans, Large Language Models (LLMs) struggle to generate high-quality long-form text that adheres to strict requirements in a single pass. This challenge is unsurprising, as successful human writing, according to the Cognitive Writing Theory, is a complex cognitive process involving iterative planning, translating, reviewing, and monitoring. Motivated by these cognitive principles, we aim to equip LLMs with human-like cognitive writing capabilities through CogWriter, a novel training-free framework that transforms LLM constrained long-form text generation into a systematic cognitive writing paradigm. Our framework consists of two key modules: (1) a Planning Agent that performs hierarchical planning to decompose the task, and (2) multiple Generation Agents that execute these plans in parallel. The system maintains quality via continuous monitoring and reviewing mechanisms, which evaluate outputs against specified requirements and trigger necessary revisions. CogWriter demonstrates exceptional performance on LongGenBench, a benchmark for complex constrained long-form text generation. Even when using Qwen-2.5-14B as its backbone, CogWriter surpasses GPT-4o by 22% in complex instruction completion accuracy while reliably generating texts exceeding 10,000 words. We hope this cognitive science-inspired approach provides a paradigm for LLM writing advancements: https://github.com/KaiyangWan/CogWriter
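
The monitoring-and-reviewing mechanism mentioned in the abstract can be pictured as a check-then-re-prompt loop: evaluate the draft against the stated requirements, and re-prompt the model only when some requirement is violated. The sketch below is a simplified illustration; the constraints (a minimum word count and required keywords), the call_llm stub, and the revision budget are assumptions, not LongGenBench's actual instructions or the authors' code.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM wrapper; same stand-in as in the sketch above."""
    raise NotImplementedError


def check_constraints(text: str, min_words: int, required_terms: list[str]) -> list[str]:
    """Return the list of violated requirements; an empty list means the draft passes."""
    issues = []
    if len(text.split()) < min_words:
        issues.append(f"too short: fewer than {min_words} words")
    for term in required_terms:
        if term.lower() not in text.lower():
            issues.append(f"missing required term: {term}")
    return issues


def review_and_revise(draft: str, min_words: int, required_terms: list[str],
                      max_rounds: int = 3) -> str:
    # Reviewing loop: re-prompt with the detected issues until the draft
    # satisfies the requirements or the revision budget is exhausted.
    for _ in range(max_rounds):
        issues = check_constraints(draft, min_words, required_terms)
        if not issues:
            break
        draft = call_llm(
            "Revise the draft so it fixes these issues:\n- "
            + "\n- ".join(issues)
            + f"\n\nDraft:\n{draft}"
        )
    return draft
```
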
Problem

Research questions and friction points this paper is trying to address.

Enhance long-form text generation in LLMs
Implement cognitive writing processes in AI
Improve adherence to complex text requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free framework for text generation
Hierarchical planning with Planning Agent
Parallel execution by Generation Agents