🤖 AI Summary
This study addresses the low accuracy of large language models (LLMs) in FAIR-compliance validation of biosample metadata. We propose a structured-knowledge-guided prompting method that integrates the CEDAR template repository, domain-specific data dictionaries, and GPT-4 into a framework for verifying conformance to metadata standards, demonstrated on human lung cancer biosamples. Experimental results show that incorporating structured knowledge significantly improves field-level standards compliance from 79% to 97% (p < 0.01), providing empirical evidence that structured knowledge bases can overcome performance bottlenecks inherent to purely text-based LLM prompting in metadata governance. Our approach establishes a paradigm for automated, high-accuracy, and interpretable FAIR metadata quality control, enabling scalable, standards-aware curation of biomedical metadata.
📝 Abstract
Metadata play a crucial role in ensuring the findability, accessibility, interoperability, and reusability of datasets. This paper investigates the potential of large language models (LLMs), specifically GPT-4, to improve adherence to metadata standards. We conducted experiments on 200 randomly selected data records describing human samples relating to lung cancer from the NCBI BioSample repository, evaluating GPT-4's ability to suggest edits that improve adherence to metadata standards. We computed the adherence accuracy of field name-field value pairs through a peer review process, and we observed a marginal average improvement in adherence to the standard data dictionary from 79% to 80% (p<0.5). We then prompted GPT-4 with domain information in the form of the textual descriptions of CEDAR templates and recorded a significant improvement, from 79% to 97% (p<0.01). These results indicate that, while LLMs may not be able to correct legacy metadata to ensure satisfactory adherence to standards when unaided, they do show promise for use in automated metadata curation when integrated with a structured knowledge base.
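The approach described in the abstract, pairing a legacy metadata record with the textual description of a CEDAR template and then scoring field name-field value adherence against a data dictionary, can be sketched as follows. This is a minimal illustration only: the field names, allowed values, template text, and scoring rule are assumptions for the example, not the paper's actual templates or evaluation protocol, and the LLM call itself is left out.

```python
# Illustrative sketch: build a structured-knowledge-guided prompt from a
# CEDAR-style template description plus a BioSample-like record, and score
# how many field name-field value pairs already conform to a toy data
# dictionary. All names and values here are hypothetical examples.

ALLOWED_FIELDS = {  # toy data dictionary derived from a CEDAR-style template
    "disease": {"lung adenocarcinoma", "lung squamous cell carcinoma"},
    "tissue": {"lung"},
    "sex": {"male", "female"},
}

CEDAR_TEMPLATE_DESCRIPTION = """\
Field 'disease': an ontology term for the diagnosed condition.
Field 'tissue': the anatomical source of the sample.
Field 'sex': one of 'male' or 'female'.
"""

def build_prompt(record: dict) -> str:
    """Combine the template description with a record, asking for edits."""
    pairs = "\n".join(f"{k}: {v}" for k, v in record.items())
    return (
        "You are a metadata curator. Using the template below, suggest "
        "corrected field name-field value pairs for this record.\n\n"
        f"TEMPLATE:\n{CEDAR_TEMPLATE_DESCRIPTION}\nRECORD:\n{pairs}\n"
    )

def adherence(record: dict) -> float:
    """Fraction of field name-field value pairs conforming to the dictionary."""
    ok = sum(1 for k, v in record.items()
             if k in ALLOWED_FIELDS and v in ALLOWED_FIELDS[k])
    return ok / len(record)

legacy = {"disease": "Lung Cancer", "tissue": "lung", "sex": "F"}
print(round(adherence(legacy), 2))  # only 'tissue' conforms: 0.33
print(build_prompt(legacy))          # prompt sent to the LLM (call omitted)
```

In the study's setup, the model's suggested edits would replace the non-conforming values, and the same adherence metric (here a toy version) would be recomputed on the corrected record.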