The Shiny Scary Future of Automated Research Synthesis in HCI

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This position paper addresses reliability concerns and human-centered boundaries in applying large language models (LLMs) to automate systematic literature reviews (SLRs) in Human-Computer Interaction (HCI). Method: Through empirical LLM experiments, human-AI collaborative workflow design, and HCI methodology analysis, it delineates the review stages amenable to automation (e.g., initial screening) from those requiring human agency (e.g., thematic analysis, cross-study inference). Contribution/Results: The study proposes a "human-centered augmentation" ethical framework that defines LLMs' capabilities and limitations in research synthesis, and it delivers actionable, rigor-preserving guidelines for SLR practitioners. The paper has been selected for a spotlight discussion at CHI '25.

📝 Abstract
Automation and semi-automation through computational tools like LLMs are making their way into research synthesis and secondary research, such as systematic reviews. In some steps of research synthesis, such tools can provide substantial benefits by saving time previously spent on repetitive tasks. The screening stages in particular may benefit from carefully vetted computational support. However, this position paper argues for additional caution when bringing such tools into the analysis and synthesis phases, where human judgement and expertise should remain paramount throughout the process.
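The screening workflow the abstract gestures at can be sketched as a human-in-the-loop pipeline: an automated pre-screen may fast-track likely-relevant records, but no record is excluded without a human decision. The sketch below is a minimal illustration under assumed names; the `classify` stub stands in for any model (LLM or otherwise) and is not the paper's method.

```python
# Hypothetical sketch of human-in-the-loop screening for a systematic
# review. The automated step may ADD records to the include pile, but
# every uncertain record is routed to a human reviewer; nothing is
# excluded automatically. classify() is a placeholder, not a real API.

from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str

def classify(record: Record, keywords: list[str]) -> bool:
    """Stub 'model': include if any keyword appears in title/abstract."""
    text = (record.title + " " + record.abstract).lower()
    return any(k.lower() in text for k in keywords)

def screen(records: list[Record], keywords: list[str]):
    """Partition records into auto-included vs. needs-human-review.
    The second list preserves human agency over every exclusion."""
    auto_include, needs_review = [], []
    for r in records:
        (auto_include if classify(r, keywords) else needs_review).append(r)
    return auto_include, needs_review

papers = [
    Record("LLMs for systematic reviews",
           "Screening titles and abstracts with large language models."),
    Record("Bird migration patterns",
           "An ecology field study unrelated to research synthesis."),
]
included, review_queue = screen(papers, ["systematic review", "screening"])
```

The key design choice, in line with the paper's position, is the asymmetry: automation only accelerates inclusion, while exclusions stay with the human reviewer.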
Problem

Research questions and friction points this paper is trying to address.

LLM Assistance
Research Integrity
Human Expertise Preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Systematic Review
Human Expertise Integration