🤖 AI Summary
This work addresses the growing difficulty of discovering, evaluating, and synthesizing academic literature as its volume rapidly expands. The authors propose Paper Circle, an end-to-end multi-agent framework that integrates multi-agent collaboration, structured knowledge graphs, and reproducible outputs. The system comprises two pipelines: literature discovery, featuring multi-source retrieval and diversity-aware ranking, and literature analysis, which transforms papers into typed knowledge graphs to enable graph-aware question answering and coverage validation. It produces multiple synchronized output formats, including JSON, CSV, and BibTeX. Experiments on paper retrieval and paper review generation show consistent gains in Hit Rate, MRR, and Recall@K as agent model capability increases. The code and an accompanying website are publicly released.
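The summary mentions diversity-aware ranking in the discovery pipeline but does not specify the algorithm. A common way to realize this is a greedy MMR-style trade-off between relevance and redundancy; the sketch below is a minimal illustration under that assumption, with all names (`Candidate`, `diversity_rank`, `lam`) hypothetical rather than taken from the Paper Circle codebase.

```python
# Hypothetical sketch of a diversity-aware re-ranker in the spirit of the
# Discovery Pipeline. The MMR-style objective is an assumption; the paper
# only states that ranking is "diversity-aware".
from dataclasses import dataclass


@dataclass
class Candidate:
    paper_id: str
    relevance: float        # multi-criteria score from upstream agents
    embedding: list         # dense vector used for pairwise similarity


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def diversity_rank(cands, k=5, lam=0.7):
    """Greedy selection: balance relevance against similarity to picked papers."""
    remaining, picked = list(cands), []
    while remaining and len(picked) < k:
        best = max(
            remaining,
            key=lambda c: lam * c.relevance
            - (1 - lam) * max(
                (cosine(c.embedding, p.embedding) for p in picked),
                default=0.0,
            ),
        )
        picked.append(best)
        remaining.remove(best)
    return picked
```

With `lam=0.7`, a near-duplicate of an already-selected paper is penalized, so a less relevant but novel paper can outrank it in the final list.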
📝 Abstract
The rapid growth of scientific literature has made it increasingly difficult for researchers to efficiently discover, evaluate, and synthesize relevant work. Recent advances in multi-agent large language models (LLMs) have demonstrated strong potential for understanding user intent and using external tools. In this paper, we introduce Paper Circle, a multi-agent research discovery and analysis system designed to reduce the effort required to find, assess, organize, and understand academic literature. The system comprises two complementary pipelines: (1) a Discovery Pipeline that integrates offline and online retrieval from multiple sources, multi-criteria scoring, diversity-aware ranking, and structured outputs; and (2) an Analysis Pipeline that transforms individual papers into structured knowledge graphs with typed nodes such as concepts, methods, experiments, and figures, enabling graph-aware question answering and coverage verification. Both pipelines are implemented within a coder-LLM-based multi-agent orchestration framework and produce fully reproducible, synchronized outputs, including JSON, CSV, BibTeX, Markdown, and HTML, at each agent step. This paper describes the system architecture, agent roles, retrieval and scoring methods, knowledge graph schema, and evaluation interfaces that together form the Paper Circle research workflow. We benchmark Paper Circle on both paper retrieval and paper review generation, reporting Hit Rate, MRR, and Recall@K, and observe consistent improvements with stronger agent models. The website is publicly available at https://papercircle.vercel.app/ and the code at https://github.com/MAXNORM8650/papercircle.
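The Analysis Pipeline's knowledge graph has typed nodes (concepts, methods, experiments, figures) and supports coverage verification. The sketch below shows one plausible shape for such a schema; everything beyond the four node types named in the abstract (the class names, the edge representation, and the coverage heuristic) is an illustrative assumption, not Paper Circle's actual schema.

```python
# Illustrative per-paper knowledge-graph schema. Only the four node types
# come from the abstract; the rest of this design is assumed for the sketch.
from dataclasses import dataclass, field

NODE_TYPES = {"concept", "method", "experiment", "figure"}


@dataclass(frozen=True)
class Node:
    node_id: str
    node_type: str  # must be one of NODE_TYPES
    label: str


@dataclass
class PaperGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)   # (src_id, relation, dst_id)

    def add_node(self, node_id, node_type, label):
        if node_type not in NODE_TYPES:
            raise ValueError(f"unknown node type: {node_type}")
        self.nodes[node_id] = Node(node_id, node_type, label)

    def add_edge(self, src, relation, dst):
        if src not in self.nodes or dst not in self.nodes:
            raise KeyError("both endpoints must exist before linking")
        self.edges.append((src, relation, dst))

    def coverage(self):
        """Fraction of node types present: a crude coverage-verification signal."""
        present = {n.node_type for n in self.nodes.values()}
        return len(present) / len(NODE_TYPES)
```

A graph-aware question-answering step could then traverse `edges` from a queried concept to its linked methods and experiments, while `coverage()` flags papers whose extraction missed whole node categories.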