AI Summary
This study addresses the lack of systematic benchmarks for evaluating general-purpose large language models (LLMs) on tasks involving raw genomic sequence understanding. To this end, the authors propose GenomeQA, the first controllable evaluation framework tailored to raw DNA sequences. It draws on multiple biological databases to construct a diverse short-sequence dataset of 5,200 samples spanning six inference tasks, including enhancer identification, splice site prediction, and transcription factor binding site prediction. Experiments with six state-of-the-art LLMs show that they significantly outperform random baselines and can effectively exploit signals such as GC content and local sequence motifs. However, their performance declines markedly on tasks requiring multi-step reasoning, revealing both the promise and the current limitations of LLMs in direct genomic interpretation.
Abstract
Large Language Models (LLMs) are increasingly adopted as conversational assistants in genomics, where they are mainly used to reason over biological knowledge, annotations, and analysis outputs through natural language interfaces. However, existing benchmarks either focus on specialized DNA models trained for sequence prediction or evaluate biological knowledge using text-only questions, leaving the behavior of general-purpose LLMs when directly exposed to raw genome sequences underexplored. We introduce GenomeQA, a benchmark designed to provide a controlled evaluation setting for general-purpose LLMs on sequence-based genome inference tasks. GenomeQA comprises 5,200 samples drawn from multiple biological databases, with sequence lengths ranging from 6 to 1,000 base pairs (bp), spanning six task families: Enhancer and Promoter Identification, Splice Site Identification, Taxonomic Classification, Histone Mark Prediction, Transcription Factor Binding Site Prediction, and TF Motif Prediction. Across six frontier LLMs, we find that models consistently outperform random baselines and can exploit local sequence signals such as GC content and short motifs, while performance degrades on tasks that require more indirect or multi-step inference over sequence patterns. GenomeQA establishes a diagnostic benchmark for studying and improving the use of general-purpose LLMs on raw genomic sequences.
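The abstract notes that models can exploit local sequence signals such as GC content and short motifs. A minimal sketch of what such signals look like computationally is given below; the function names are illustrative and not part of GenomeQA.

```python
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    if not seq:
        return 0.0
    return (seq.count("G") + seq.count("C")) / len(seq)

def count_motif(seq: str, motif: str) -> int:
    """Count overlapping occurrences of a short motif (e.g. a TATA box)."""
    seq, motif = seq.upper(), motif.upper()
    return sum(
        1
        for i in range(len(seq) - len(motif) + 1)
        if seq[i : i + len(motif)] == motif
    )

# GC content is a simple global statistic; motif counts capture local patterns.
print(gc_content("ATGCGC"))           # 4 of 6 bases are G or C
print(count_motif("TATATA", "TATA"))  # counts overlapping matches
```

Signals like these are directly computable from a raw sequence, which may explain why tasks driven by them are easier for LLMs than tasks requiring indirect, multi-step inference over sequence patterns.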