🤖 AI Summary
Existing research lacks standardized evaluation datasets, hindering rigorous comparison of large language model (LLM)-generated versus human-written scientific surveys. To address this, we introduce SurveyGen, the first large-scale, cross-domain scientific survey dataset with quality-annotated metadata, comprising over 4,200 surveys and 240,000 citation relations. We further propose QUAL-SG, a quality-aware framework for retrieval-augmented generation (RAG) that explicitly models literature quality during both citation selection and survey synthesis. Through multi-level human evaluation and LLM-assisted assessment, our experiments show that semi-automated surveys achieve structural coherence and topical coverage comparable to human-authored ones; however, fully automated variants still suffer from low citation quality and insufficient critical analysis. This work establishes a new benchmark, methodology, and systematic evaluation paradigm for scientific survey generation.
📝 Abstract
Automatic survey generation has emerged as a key task in scientific document processing. While large language models (LLMs) have shown promise in generating survey texts, the lack of standardized evaluation datasets critically hampers rigorous assessment of their performance against human-written surveys. In this work, we present SurveyGen, a large-scale dataset comprising over 4,200 human-written surveys across diverse scientific domains, along with 242,143 cited references and extensive quality-related metadata for both the surveys and the cited papers. Leveraging this resource, we build QUAL-SG, a novel quality-aware framework for survey generation that enhances the standard Retrieval-Augmented Generation (RAG) pipeline by incorporating quality indicators into literature retrieval to assess and select higher-quality source papers. Using this dataset and framework, we systematically evaluate state-of-the-art LLMs under varying levels of human involvement, from fully automatic generation to human-guided writing. Experimental results and human evaluations show that while semi-automatic pipelines can achieve partially competitive outcomes, fully automatic survey generation still suffers from low citation quality and limited critical analysis.
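To make the quality-aware retrieval step concrete, below is a minimal sketch of how quality indicators might be blended with retrieval relevance when re-ranking candidate source papers. The `Paper` fields, the citation/venue signals, and the weighting in `quality_aware_score` are illustrative assumptions, not QUAL-SG's actual formulation.

```python
import math
from dataclasses import dataclass


@dataclass
class Paper:
    title: str
    relevance: float   # semantic similarity to the survey topic, in [0, 1]
    citations: int     # citation count (assumed quality signal)
    venue_score: float # venue reputation in [0, 1] (assumed quality signal)


def quality_aware_score(p: Paper, alpha: float = 0.7) -> float:
    """Blend retrieval relevance with quality indicators.

    alpha trades off relevance against quality; the indicators and
    weights here are illustrative, not the paper's exact method.
    """
    # Log-scale citation counts so a few highly cited papers don't dominate.
    citation_signal = min(math.log1p(p.citations) / math.log1p(10_000), 1.0)
    quality = 0.5 * citation_signal + 0.5 * p.venue_score
    return alpha * p.relevance + (1 - alpha) * quality


def select_sources(candidates: list[Paper], k: int = 50) -> list[Paper]:
    """Re-rank retrieved candidates and keep the top-k for the RAG context."""
    return sorted(candidates, key=quality_aware_score, reverse=True)[:k]


if __name__ == "__main__":
    pool = [
        Paper("Preprint A", relevance=0.92, citations=15, venue_score=0.3),
        Paper("Survey B", relevance=0.85, citations=2400, venue_score=0.9),
    ]
    for p in select_sources(pool, k=2):
        print(f"{p.title}: {quality_aware_score(p):.3f}")
```

Under this weighting, a slightly less relevant but well-cited, well-published paper can outrank a marginally more relevant preprint, which is the kind of trade-off a quality-aware retrieval stage is meant to expose.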