🤖 AI Summary
Current long-form article generation (LFAG) suffers from logical inconsistency, incomplete topic coverage, and narrative incoherence—primarily due to the absence of hierarchically structured datasets with fine-grained annotations. To address this, we propose DeFine: the first hierarchical, multi-level fine-grained dataset for LFAG, featuring three-dimensional annotations—logical chain validity, topic coverage completeness, and narrative coherence. DeFine is constructed via a multi-agent collaborative pipeline integrating domain-knowledge injection, citation retrieval, question-answering–based annotation, and rigorous data cleaning. Leveraging DeFine, we fine-tune Qwen2-7B-Instruct and design three baselines: web retrieval, local retrieval, and grounded reference. Experiments demonstrate substantial improvements in topic coverage breadth, informational depth, and content fidelity. The DeFine dataset is publicly released, establishing a new benchmark and enabling standardized evaluation for LFAG research.
📝 Abstract
Long-form article generation (LFAG) presents challenges such as maintaining logical consistency, comprehensive topic coverage, and narrative coherence across extended articles. Existing datasets often lack both the hierarchical structure and fine-grained annotation needed to effectively decompose tasks, resulting in shallow, disorganized article generation. To address these limitations, we introduce DeFine, a Decomposed and Fine-grained annotated dataset for long-form article generation. DeFine is characterized by its hierarchical decomposition strategy and the integration of domain-specific knowledge with multi-level annotations, ensuring granular control and enhanced depth in article generation. To construct the dataset, we propose a multi-agent collaborative pipeline that systematically segments the generation process into four parts: Data Miner, Cite Retriever, Q&A Annotator, and Data Cleaner. To validate the effectiveness of DeFine, we designed and tested three LFAG baselines: web retrieval, local retrieval, and grounded reference. We fine-tuned the Qwen2-7B-Instruct model on the DeFine training set. The experimental results show significant improvements in text quality, specifically in topic coverage, depth of information, and content fidelity. Our dataset is publicly available to facilitate future research.