SciHorizon-GENE: Benchmarking LLM for Life Sciences Inference from Gene Knowledge to Functional Understanding

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack systematic evaluation of their ability to reason reliably from gene-centric knowledge to functional understanding, limiting their safe deployment in biological interpretation tasks such as cell atlas annotation. To address this gap, this work introduces SciHorizon-GENE, a large-scale gene-centered benchmark that integrates authoritative knowledge on over 190,000 human genes and comprises more than 540,000 gene-to-function reasoning questions. For the first time, LLMs are evaluated across four biologically critical dimensions: sensitivity to research attention bias, hallucination propensity, answer completeness, and alignment with literature impact. The study reveals significant performance gaps between general-purpose and biomedical LLMs in gene-level reasoning and identifies their core failure modes, providing an empirical foundation for informed model selection, model development, and the design of knowledge-enhanced systems for biological interpretation.

📝 Abstract
Large language models (LLMs) have shown growing promise in biomedical research, particularly for knowledge-driven interpretation tasks. However, their ability to reliably reason from gene-level knowledge to functional understanding, a core requirement for knowledge-enhanced cell atlas interpretation, remains largely underexplored. To address this gap, we introduce SciHorizon-GENE, a large-scale gene-centric benchmark constructed from authoritative biological databases. The benchmark integrates curated knowledge for over 190K human genes and comprises more than 540K questions covering diverse gene-to-function reasoning scenarios relevant to cell type annotation, functional interpretation, and mechanism-oriented analysis. Motivated by behavioral patterns observed in preliminary examinations, SciHorizon-GENE evaluates LLMs along four biologically critical perspectives: research attention sensitivity, hallucination tendency, answer completeness, and literature influence, explicitly targeting failure modes that limit the safe adoption of LLMs in biological interpretation pipelines. We systematically evaluate a wide range of state-of-the-art general-purpose and biomedical LLMs, revealing substantial heterogeneity in gene-level reasoning capabilities and persistent challenges in generating faithful, complete, and literature-grounded functional interpretations. Our benchmark establishes a systematic foundation for analyzing LLM behavior at the gene scale and offers insights for model selection and development, with direct relevance to knowledge-enhanced biological interpretation.
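Two of the abstract's evaluation dimensions, answer completeness and hallucination tendency, can be made concrete with a small scoring sketch. The function below is a hypothetical illustration only: the term-matching scheme, function name, and metrics are assumptions for exposition, not SciHorizon-GENE's actual implementation.

```python
# Hypothetical sketch of scoring a model's gene-to-function answer against
# curated annotations, in the spirit of the benchmark's completeness and
# hallucination dimensions. All names and the exact-match scheme are
# illustrative assumptions, not the paper's method.

def score_answer(predicted_terms, gold_terms):
    """Score predicted function terms for one gene.

    completeness: fraction of curated (gold) terms the model recovered.
    hallucination: fraction of predicted terms absent from the curation.
    """
    predicted = {t.strip().lower() for t in predicted_terms}
    gold = {t.strip().lower() for t in gold_terms}
    completeness = len(predicted & gold) / len(gold) if gold else 0.0
    hallucination = len(predicted - gold) / len(predicted) if predicted else 0.0
    return completeness, hallucination

# Example: a curated annotation vs. a model answer for one gene.
gold = ["dna repair", "tumor suppression"]
pred = ["DNA repair", "apoptosis regulation"]
c, h = score_answer(pred, gold)
# c == 0.5 (one of two gold terms recovered)
# h == 0.5 (one of two predicted terms unsupported by the curation)
```

In practice such scoring would need fuzzy or ontology-aware matching (e.g. via GO term identifiers) rather than lowercase string equality, which is one reason benchmarks like this are nontrivial to build.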
Problem

The research questions and friction points this paper addresses.

large language models
gene-to-function reasoning
biomedical interpretation
cell atlas
functional understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

gene-centric benchmark
large language models
functional interpretation
hallucination evaluation
biomedical reasoning