Agentic AutoSurvey: Let LLMs Survey LLMs

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
The explosive growth of scientific literature severely hinders researchers’ ability to efficiently synthesize knowledge in emerging domains. To address this challenge, we propose the first multi-agent framework specifically designed for automated academic survey generation. It comprises four specialized agents—retrieval, clustering, writing, and evaluation—enabling end-to-end survey synthesis. The framework integrates large language models (LLMs) with scholarly search, topic modeling, structured writing, and multidimensional quality assessment, supported by a novel 12-dimensional evaluation metric. Empirical evaluation across six cutting-edge LLM research topics demonstrates that our generated surveys achieve an average quality score of 8.18/10—significantly outperforming baseline methods (4.77/10)—while covering 847 papers and achieving >80% citation accuracy for key works. The framework substantially enhances logical coherence, integrative depth, and critical analytical capability in survey generation.

📝 Abstract
The exponential growth of scientific literature poses unprecedented challenges for researchers attempting to synthesize knowledge across rapidly evolving fields. We present **Agentic AutoSurvey**, a multi-agent framework for automated survey generation that addresses fundamental limitations in existing approaches. Our system employs four specialized agents (Paper Search Specialist, Topic Mining & Clustering, Academic Survey Writer, and Quality Evaluator) working in concert to generate comprehensive literature surveys with superior synthesis quality. Through experiments on six representative LLM research topics from COLM 2024 categories, we demonstrate that our multi-agent approach achieves significant improvements over existing baselines, scoring 8.18/10 compared to AutoSurvey's 4.77/10. The multi-agent architecture processes 75–443 papers per topic (847 total across six topics) while targeting high citation coverage (often ≥80% on 75–100-paper sets; lower on very large sets such as RLHF) through specialized agent orchestration. Our 12-dimension evaluation captures organization, synthesis integration, and critical analysis beyond basic metrics. These findings demonstrate that multi-agent architectures represent a meaningful advancement for automated literature survey generation in rapidly evolving scientific domains.
Problem

Research questions and friction points this paper is trying to address.

Addresses the challenge of synthesizing knowledge from rapidly growing scientific literature in emerging domains
Overcomes limitations of existing automated survey generation methods, such as the AutoSurvey baseline (4.77/10)
Generates comprehensive literature surveys with stronger synthesis, logical coherence, and critical analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework with four specialized agents: Paper Search Specialist, Topic Mining & Clustering, Academic Survey Writer, and Quality Evaluator
End-to-end automated survey generation, from retrieval through 12-dimension quality evaluation
Superior synthesis quality (8.18/10 vs. 4.77/10) achieved through specialized agent orchestration
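The paper's pipeline chains four specialized agents: retrieval, clustering, writing, and evaluation. The class names, interfaces, and the trivial keyword-based heuristics below are illustrative assumptions (the summary does not expose the actual implementation, which delegates each step to an LLM); this minimal sketch only shows how such an orchestration could be wired end to end.

```python
# Hypothetical sketch of a four-agent survey pipeline. Agent names and
# heuristics are placeholders, not the paper's actual implementation.
from dataclasses import dataclass


@dataclass
class Paper:
    title: str
    abstract: str


class SearchAgent:
    """Retrieves candidate papers for a topic (keyword stub here)."""
    def retrieve(self, topic, corpus):
        needle = topic.lower()
        return [p for p in corpus
                if needle in (p.title + " " + p.abstract).lower()]


class ClusteringAgent:
    """Groups papers into themes; here, a trivial first-word bucket."""
    def cluster(self, papers):
        buckets = {}
        for p in papers:
            key = p.title.split()[0].lower()
            buckets.setdefault(key, []).append(p)
        return buckets


class WriterAgent:
    """Drafts one survey section per cluster."""
    def write(self, clusters):
        sections = []
        for theme, papers in clusters.items():
            titles = "; ".join(p.title for p in papers)
            sections.append(f"## {theme}\nCovered papers: {titles}")
        return "\n\n".join(sections)


class EvaluatorAgent:
    """Scores a draft; a placeholder for the 12-dimension evaluation."""
    def score(self, draft):
        return min(10.0, 2.0 + 0.5 * draft.count("##"))


def run_pipeline(topic, corpus):
    """Orchestrate search -> cluster -> write -> evaluate."""
    papers = SearchAgent().retrieve(topic, corpus)
    clusters = ClusteringAgent().cluster(papers)
    draft = WriterAgent().write(clusters)
    return draft, EvaluatorAgent().score(draft)
```

In the actual system each agent would wrap LLM calls and scholarly search APIs rather than string heuristics, but the control flow, with the evaluator closing the loop on draft quality, follows the same shape.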