🤖 AI Summary
This work addresses the limitations of existing automatic survey generation (ASG) evaluation methods, which rely on generic metrics and are largely confined to computer science, thereby failing to capture cross-disciplinary variation in survey quality. To bridge this gap, the authors propose SurveyLens, the first multidisciplinary benchmark for ASG evaluation, built on SurveyLens-1k, a curated dataset of 1,000 human-written surveys spanning ten academic disciplines. They introduce a dual-perspective evaluation framework: one lens assesses adherence to discipline-specific conventions through tailored scoring rubrics, while the other evaluates content coverage via Canonical Alignment Evaluation. Notably, the framework incorporates a discipline-aware large language model scoring mechanism and a multi-agent evaluation system. Experiments on eleven state-of-the-art ASG methods reveal significant performance disparities across disciplines, offering empirical guidance for users selecting appropriate tools.
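At its core, the rubric lens amounts to a discipline-weighted aggregation of per-criterion scores produced by an LLM judge. Below is a minimal sketch of that idea; the criterion names, weights, and scoring scale are hypothetical placeholders, and the paper's actual rubrics, weight-fitting procedure, and scorer prompts are not reproduced here.

```python
from dataclasses import dataclass


@dataclass
class RubricCriterion:
    name: str      # e.g. "argumentation" (hypothetical criterion name)
    weight: float  # human preference-aligned weight for this discipline


def rubric_score(llm_scores: dict[str, float],
                 rubric: list[RubricCriterion]) -> float:
    """Weighted average of per-criterion LLM judge scores in [0, 1].

    `llm_scores` maps criterion name -> score assigned by an LLM scorer;
    `rubric` carries the discipline-specific weights.
    """
    total_weight = sum(c.weight for c in rubric)
    return sum(c.weight * llm_scores[c.name] for c in rubric) / total_weight


# Example: a hypothetical humanities rubric that weights argumentation heavily.
humanities_rubric = [
    RubricCriterion("argumentation", 0.5),
    RubricCriterion("coverage", 0.3),
    RubricCriterion("citation_practice", 0.2),
]
print(rubric_score(
    {"argumentation": 0.8, "coverage": 0.6, "citation_practice": 0.9},
    humanities_rubric,
))  # -> 0.76
```

The point of the weighting is that the same generated survey can score differently under different disciplines' rubrics, which is what makes the evaluation discipline-aware.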
📝 Abstract
The exponential growth of scientific literature has driven the evolution of Automatic Survey Generation (ASG) from simple pipelines to multi-agent frameworks and commercial Deep Research agents. However, current ASG evaluation methods rely on generic metrics and are heavily biased toward Computer Science (CS), failing to assess whether ASG methods adhere to the distinct standards of various academic disciplines. Consequently, researchers, especially those outside CS, lack clear guidance on which ASG systems yield high-quality surveys that comply with their discipline's standards. To bridge this gap, we introduce SurveyLens, the first discipline-aware benchmark evaluating ASG methods across diverse research disciplines. We construct SurveyLens-1k, a curated dataset of 1,000 high-quality human-written surveys spanning 10 disciplines. Subsequently, we propose a dual-lens evaluation framework: (1) Discipline-Aware Rubric Evaluation, which utilizes LLMs with human preference-aligned weights to assess adherence to domain-specific writing standards; and (2) Canonical Alignment Evaluation, which rigorously measures content coverage and synthesis quality against human-written survey papers. We conduct extensive experiments evaluating 11 state-of-the-art ASG methods on SurveyLens, spanning vanilla LLMs, ASG systems, and Deep Research agents. Our analysis reveals the distinct strengths and weaknesses of each paradigm across fields, providing essential guidance for selecting tools tailored to specific disciplinary requirements.
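One plausible reading of the second lens is a coverage check of the generated survey against the canonical human-written one, e.g. how much of the canonical survey's cited literature the generated survey recovers. The sketch below illustrates that reading as a simple set-overlap computation; this is an assumption for illustration only, since the paper's Canonical Alignment Evaluation also scores synthesis quality, not just overlap.

```python
def canonical_alignment(generated_refs: set[str],
                        canonical_refs: set[str]) -> dict[str, float]:
    """Set-overlap sketch of coverage against a canonical human survey.

    Hypothetical simplification: treats each survey as a set of citation
    keys and reports how well the generated set matches the canonical one.
    """
    hits = generated_refs & canonical_refs
    recall = len(hits) / len(canonical_refs) if canonical_refs else 0.0
    precision = len(hits) / len(generated_refs) if generated_refs else 0.0
    return {"recall": recall, "precision": precision}


# Example with toy citation keys.
print(canonical_alignment(
    {"smith2020", "li2021"},
    {"smith2020", "li2021", "chen2019"},
))  # -> {'recall': 0.666..., 'precision': 1.0}
```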