GenomeQA: Benchmarking General Large Language Models for Genome Sequence Understanding

📅 2026-04-07
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the lack of systematic evaluation benchmarks for general-purpose large language models (LLMs) on tasks involving raw genomic sequence understanding. To this end, the authors propose GenomeQA, the first controllable evaluation framework tailored to raw DNA sequences, which leverages multi-source biological databases to construct a diverse short-sequence dataset comprising 5,200 samples across six inference tasks, including enhancer identification, splice site prediction, and transcription factor binding site prediction. Experimental results demonstrate that six state-of-the-art LLMs significantly outperform random baselines and effectively exploit signals such as GC content and local sequence motifs. However, their performance markedly declines on tasks requiring multi-step reasoning, revealing both the promise and current limitations of LLMs in direct genomic interpretation.
📝 Abstract
Large Language Models (LLMs) are increasingly adopted as conversational assistants in genomics, where they are mainly used to reason over biological knowledge, annotations, and analysis outputs through natural language interfaces. However, existing benchmarks either focus on specialized DNA models trained for sequence prediction or evaluate biological knowledge using text-only questions, leaving the behavior of general-purpose LLMs when directly exposed to raw genome sequences underexplored. We introduce GenomeQA, a benchmark designed to provide a controlled evaluation setting for general-purpose LLMs on sequence-based genome inference tasks. GenomeQA comprises 5,200 samples drawn from multiple biological databases, with sequence lengths ranging from 6 to 1,000 base pairs (bp), spanning six task families: Enhancer and Promoter Identification, Splice Site Identification, Taxonomic Classification, Histone Mark Prediction, Transcription Factor Binding Site Prediction, and TF Motif Prediction. Across six frontier LLMs, we find that models consistently outperform random baselines and can exploit local sequence signals such as GC content and short motifs, while performance degrades on tasks that require more indirect or multi-step inference over sequence patterns. GenomeQA establishes a diagnostic benchmark for studying and improving the use of general-purpose LLMs on raw genomic sequences.
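To make the "local sequence signals" concrete, here is a minimal sketch (not from the paper) of the two cues the abstract names: GC content and short motif occurrence in a raw DNA string. The function names `gc_content` and `find_motif`, and the TATA-box example, are illustrative assumptions, not the benchmark's code.

```python
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence (illustrative only)."""
    seq = seq.upper()
    return sum(base in "GC" for base in seq) / len(seq) if seq else 0.0

def find_motif(seq: str, motif: str) -> list[int]:
    """Start positions of an exact short motif, e.g. a TATA box."""
    seq, motif = seq.upper(), motif.upper()
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

# Promoter regions are often GC-rich, so a cue this simple can already
# separate some positives from negatives in enhancer/promoter tasks.
print(gc_content("ATGCGCGCTA"))            # 0.6
print(find_motif("CCTATAAAGG", "TATAAA"))  # [2]
```

Signals of this kind require no multi-step inference, which is consistent with the paper's finding that model performance drops once a task demands more than such surface statistics.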
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Genome Sequence Understanding
Benchmarking
Genomic Inference
Sequence-based Tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

GenomeQA
Large Language Models
Genome Sequence Understanding
Benchmarking
Sequence-based Inference
Weicai Long
Hong Kong University of Science and Technology (Guangzhou)
Yusen Hou
Hong Kong University of Science and Technology (Guangzhou)
Junning Feng
Hong Kong University of Science and Technology (Guangzhou)
Houcheng Su
Hong Kong University of Science and Technology (Guangzhou)
Shuo Yang
The University of Hong Kong
Donglin Xie
Peking University
Yanlin Zhang
Hong Kong University of Science and Technology (Guangzhou)