🤖 AI Summary
Multi-vocal literature reviews (MVLRs) face a significant bottleneck in screening massive, heterogeneous, and information-sparse corpora, exemplified by context-aware software systems (CASS) testing in avionics, where manually reviewing more than 8,000 publications incurs prohibitive cost. This paper proposes a domain-adapted, locally deployed retrieval-augmented generation (RAG) framework that combines large language models (LLMs) with structured prompt engineering to identify non-relevant publications with high precision. Evaluated on the CASS-testing literature via convenience sampling and statistical validation, the tool reaches 90% positive percent agreement (PPA) with human reviewers on non-relevant sources, substantially reducing manual screening effort. Key contributions include: (1) a reusable, lightweight, localized RAG toolchain; (2) a balanced design that addresses both academic rigor and engineering practicality; and (3) a verifiable methodological paradigm for LLM-assisted, evidence-based literature reviews.
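The summary describes the screening pipeline only at a high level. As a rough illustration of what such a prompt-based relevance filter could look like, here is a minimal Python sketch assuming an on-premises LLM served over an Ollama-style HTTP API; the endpoint, model name, prompt wording, and label parsing are illustrative assumptions, not the paper's actual toolchain.

```python
"""Minimal sketch of LLM-assisted screening for an MVLR corpus.

Illustrative only: the endpoint, model name, prompt wording, and
label parsing are assumptions, not the authors' implementation.
Assumes an on-premises LLM served via an Ollama-style HTTP API.
"""
import json
import urllib.request

LLM_URL = "http://localhost:11434/api/generate"  # hypothetical local endpoint
MODEL = "llama3"                                 # hypothetical local model

# Structured prompt: fixed screening criteria plus the candidate source text.
PROMPT_TEMPLATE = """You are screening sources for a literature review on
testing context-aware software systems (CASS) in avionics.

Criteria for NOT RELEVANT:
- Does not discuss software testing, OR
- Does not involve context-aware behavior, OR
- Has no connection to avionics or a comparable safety-critical domain.

Source title: {title}
Source excerpt: {excerpt}

Answer with exactly one label: RELEVANT or NOT_RELEVANT."""


def screen_source(title: str, excerpt: str) -> str:
    """Ask the local LLM to classify one candidate source."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": PROMPT_TEMPLATE.format(title=title, excerpt=excerpt),
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        LLM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)["response"]
    # Conservative parsing: anything other than an explicit NOT_RELEVANT
    # verdict is kept for human review, so the tool only filters out
    # sources it is confident about.
    return "NOT_RELEVANT" if "NOT_RELEVANT" in answer else "NEEDS_HUMAN_REVIEW"


if __name__ == "__main__":
    print(screen_source(
        title="Adaptive lighting control for smart homes",
        excerpt="We present an IoT platform for home automation...",
    ))
```

The one-sided output reflects the evaluation reported here: only the "not relevant" verdict is automated, while everything else stays with the human reviewers.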
📝 Abstract
Background: Conducting Multi-Vocal Literature Reviews (MVLRs) is often time- and effort-intensive. Researchers must review and filter a large number of unstructured sources, which frequently contain sparse information and are unlikely to be included in the final study. Our experience conducting an MVLR on Context-Aware Software Systems (CASS) testing in the avionics domain exemplified this challenge, with over 8,000 highly heterogeneous documents requiring review. We therefore developed a Large Language Model (LLM) assistant to support the search and filtering of documents. Aims: To develop and validate an LLM-based tool that can support researchers in searching and filtering documents for an MVLR without compromising the rigor of the research protocol. Method: We applied sound engineering practices to develop an on-premises LLM-based tool incorporating Retrieval-Augmented Generation (RAG) to process candidate sources. Progress towards the aim was quantified using Positive Percent Agreement (PPA) as the primary metric of the tool's performance. Convenience sampling, supported by human judgment and statistical sampling, was used to verify and validate the tool's quality-in-use. Results: The tool currently demonstrates a PPA of 90% with human researchers for sources that are not relevant to the study. Development details are shared to support domain-specific adaptation of the tool. Conclusions: Using LLM-based tools to support academic researchers in rigorous MVLRs is feasible. These tools can free valuable time for higher-level, abstract tasks. However, researcher participation remains essential to ensure that the tool supports thorough research.
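For readers unfamiliar with the metric, PPA treats the human reviewers as the reference standard; taking "not relevant" as the positive class, a standard formulation is shown below. The counts in the final comment are hypothetical numbers chosen only to illustrate the reported 90% figure, not data from the study.

```latex
% a = sources that both the human reviewers and the tool label not relevant
% c = sources the human reviewers label not relevant but the tool does not
\mathrm{PPA} = \frac{a}{a + c} \times 100\%

% Illustrative arithmetic (hypothetical counts, not study data):
% a = 900, c = 100 \Rightarrow \mathrm{PPA} = 900 / (900 + 100) = 90\%
```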