🤖 AI Summary
This study evaluates the accuracy and comprehensiveness of large language models (LLMs) in answering complex, domain-specific questions about high-temperature copper-oxide superconductors, probing their capacity for expert-level understanding of the scientific literature. Method: We construct a domain-specific knowledge base of 1,726 peer-reviewed papers and curate a rigorous benchmark of 67 deep, multi-faceted questions. We propose an expert-designed, multidimensional evaluation framework assessing balance, factual completeness, conciseness, and evidentiary support, and we develop two retrieval-augmented generation (RAG) systems, including one with multimodal (text-and-figure) retrieval. Contribution/Results: The RAG-based systems significantly outperform closed-source baseline LLMs in factual coverage and evidence grounding, revealing both the emergent potential and the critical limitations of current LLMs in scientific reasoning, particularly in domain-specific inference, citation fidelity, and multimodal evidence integration.
📝 Abstract
Large Language Models (LLMs) show great promise as a powerful tool for scientific literature exploration. However, their effectiveness in providing scientifically accurate and comprehensive answers to complex questions within specialized domains remains an active area of research. Using the field of high-temperature cuprates as an exemplar, we evaluate the ability of LLM systems to understand the literature at the level of an expert. We construct an expert-curated database of 1,726 scientific papers that covers the history of the field, and a set of 67 expert-formulated questions that probe deep understanding of the literature. We then evaluate six different LLM-based systems for answering these questions, including both commercially available closed models and a custom retrieval-augmented generation (RAG) system capable of retrieving images alongside text. Experts then evaluate the answers of these systems against a rubric that assesses balanced perspectives, factual comprehensiveness, succinctness, and evidentiary support. Among the six systems, the two using RAG on curated literature outperformed the existing closed models across key metrics, particularly in providing comprehensive and well-supported answers. We discuss promising aspects of LLM performance as well as critical shortcomings of all the models. The set of expert-formulated questions and the evaluation rubric will be valuable for assessing the expert-level performance of LLM-based reasoning systems.
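The abstract does not specify how the multimodal retrieval is implemented; the following is a minimal sketch of what a text-and-figure RAG retrieval step could look like, assuming a generic embedding model over a corpus where each chunk is tagged with its source paper and modality. All names, the `embed` placeholder, and the example chunks are hypothetical illustrations, not the authors' system.

```python
import numpy as np

# Hypothetical embedding function: a real system would call a text/image
# embedding model (e.g. a sentence encoder or a CLIP-style model). Here we
# use a deterministic random vector purely so the sketch runs end to end.
def embed(text: str, dim: int = 8) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Each chunk records its source paper and modality ("text" or "figure"),
# so retrieved figures can be returned alongside text passages.
corpus = [
    {"paper": "paper_0001", "modality": "text",
     "content": "Pseudogap phase observed below T* in underdoped cuprates."},
    {"paper": "paper_0001", "modality": "figure",
     "content": "Fig. 2: doping-temperature phase diagram of YBCO."},
    {"paper": "paper_0002", "modality": "text",
     "content": "ARPES reveals a d-wave superconducting gap structure."},
]
index = np.stack([embed(c["content"]) for c in corpus])

def retrieve(question: str, k: int = 2):
    """Return the k chunks (text or figures) most similar to the question."""
    scores = index @ embed(question)
    return [corpus[i] for i in np.argsort(-scores)[:k]]

# Retrieved chunks would then be packed into the LLM prompt as cited
# evidence, with figure chunks passed to a vision-capable model.
for hit in retrieve("What characterizes the pseudogap phase?"):
    print(hit["paper"], hit["modality"], "->", hit["content"])
```

Keeping modality as metadata on a single shared index, rather than running separate text and image stores, is one simple way such a system could ground answers in both passages and figures; the paper's actual design may differ.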