🤖 AI Summary
Problem: Prior LLM applications in literature review suffer from low output credibility, high manual verification costs, and fragmented toolchains. Method: Through interdisciplinary user studies, we distilled core requirements and formulated six actionable design goals. We propose a novel "stepwise-verifiable" human-AI collaboration framework integrating generative explanations, human-AI feedback alignment, and citation-network visualization, transforming LLMs from opaque tools into trustworthy collaborators. Our approach combines qualitative analysis, interactive visual design, and generation-guided explanatory mechanisms to derive design principles and a high-level architecture tailored to real-world scholarly review tasks. Contribution/Results: Evaluation demonstrates that our framework significantly reduces verification effort, enhances output credibility, and improves collaborative efficiency, establishing a foundation for reliable, human-centered LLM-assisted literature synthesis.
📝 Abstract
Large Language Models (LLMs) are increasingly embedded in academic writing practices. Although numerous studies have explored how researchers employ these tools for scientific writing, their concrete implementation, limitations, and design challenges within the literature review process remain underexplored. In this paper, we report a user study with researchers across multiple disciplines to characterize current practices, benefits, and *pain points* in using LLMs to investigate related work. We identified three recurring gaps: (i) lack of trust in outputs, (ii) persistent verification burden, and (iii) reliance on multiple, fragmented tools. This motivates our proposal of six design goals and a high-level framework that operationalizes them through improved visualization of related papers, verification at every step, and human-feedback alignment with generation-guided explanations. Overall, by grounding our work in the practical, day-to-day needs of researchers, we designed a framework that addresses these limitations and models real-world LLM-assisted writing, advancing trust through verifiable actions and fostering practical collaboration between researchers and AI systems.