OPENXRD: A Comprehensive Benchmark and Enhancement Framework for LLM/MLLM XRD Question Answering

📅 2025-07-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) and multimodal LLMs (MLLMs) reason inaccurately in X-ray diffraction (XRD) question answering because they lack domain-specific crystallographic knowledge. Method: This paper introduces OPENXRD, the first open-source, open-book QA framework tailored to XRD. It uses GPT-4.5 to generate copyright-free, concise, high-fidelity domain reference texts, replacing scanned textbook material, enabling lightweight and effective knowledge injection. OPENXRD evaluates vision-language models (e.g., LLaVA-based variants built on Mistral, LLaMA, and Qwen) under both closed-book (no supporting material) and open-book (with supporting material) conditions; joint reasoning over XRD diffraction patterns is left to future work. Contribution/Results: On 217 expert-curated XRD questions, OPENXRD substantially improves the accuracy of smaller models, empirically validating the efficacy of AI-synthesized reference texts in scientific QA.

📝 Abstract
This work presents OPENXRD, an open-book pipeline designed for crystallography question answering, which integrates textual prompts with concise supporting content generated by GPT-4.5. Instead of using scanned textbooks, which may lead to copyright issues, OPENXRD generates compact, domain-specific references that help smaller models understand key concepts in X-ray diffraction (XRD). We evaluate OPENXRD on a well-defined set of 217 expert-level XRD questions by comparing different vision-language models, including GPT-4 and LLaVA-based frameworks such as Mistral, LLaMA, and QWEN, under both closed-book (without supporting material) and open-book (with supporting material) conditions. Our experimental results show significant accuracy improvements in models that use the GPT-4.5-generated summaries, particularly those with limited prior training in crystallography. OPENXRD uses knowledge from larger models to fill knowledge gaps in crystallography and shows that AI-generated texts can help smaller models reason more effectively in scientific tasks. While the current version of OPENXRD focuses on text-based inputs, we also explore future extensions such as adding real crystal diagrams or diffraction patterns to improve interpretation in specialized materials science contexts. Overall, OPENXRD shows that specialized open-book systems can be useful in materials science and provides a foundation for broader natural language processing (NLP) tools in critical scientific fields.
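The open-book pipeline described above amounts to prepending a compact, model-generated reference text to each multiple-choice question before querying the evaluated model. The following is a minimal, hypothetical sketch of that prompt-construction step; the function name, prompt wording, and example question are assumptions for illustration, not the paper's actual implementation.

```python
def build_prompt(question, choices, reference=None):
    """Assemble an XRD multiple-choice prompt.

    If `reference` is given (open-book condition), the GPT-4.5-generated
    supporting text is prepended; otherwise the prompt is closed-book.
    """
    lines = []
    if reference is not None:
        lines.append("Reference material:\n" + reference)
    lines.append("Question: " + question)
    # Label choices (A), (B), (C), ...
    lines += ["({}) {}".format(chr(65 + i), c) for i, c in enumerate(choices)]
    lines.append("Answer with the letter of the best choice.")
    return "\n".join(lines)


# Hypothetical example question in the style of the benchmark.
q = "Which relation connects diffraction angle to lattice spacing?"
opts = ["Bragg's law", "Ohm's law", "Fick's law", "Hooke's law"]

closed_book = build_prompt(q, opts)
open_book = build_prompt(
    q, opts,
    reference="Bragg's law: n*lambda = 2*d*sin(theta), relating wavelength, "
              "interplanar spacing d, and diffraction angle theta.",
)
```

Under this scheme, the same model is scored twice per question (with and without the reference block), so the accuracy delta isolates the contribution of the injected text.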
Problem

Research questions and friction points this paper is trying to address.

Enhancing XRD question answering with AI-generated references
Improving small models' crystallography understanding via open-book approach
Evaluating vision-language models' accuracy with expert-level XRD questions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-book pipeline with GPT-4.5-generated references
Evaluates vision-language models on XRD questions
Enhances smaller models with AI-generated summaries