AI Summary
Existing benchmarks inadequately assess the real-world capabilities of DNA language models (DNALMs) on critical regulatory DNA downstream tasks, including functional sequence discovery, cell-type-specific activity prediction, and counterfactual inference of variant impacts. Method: We introduce DART-Eval, the first comprehensive, regulation-focused evaluation benchmark for DNALMs, supporting zero-shot, probe-based, and fine-tuning paradigms. It integrates self-supervised DNALMs, *ab initio* baselines, and multi-level standardized tasks with biologically grounded metrics. Contribution/Results: Systematic evaluation reveals that state-of-the-art DNALMs show no significant performance advantage over lightweight baselines on most regulatory tasks, while incurring high computational cost and exhibiting unstable behavior. To address this, we propose a biology-informed evaluation framework that clarifies concrete directions for next-generation model development. All code and evaluation protocols are open-sourced to promote reproducibility and standardize the assessment of genomic AI.
Abstract
Recent advances in self-supervised models for natural language, vision, and protein sequences have inspired the development of large genomic DNA language models (DNALMs). These models aim to learn generalizable representations of diverse DNA elements, potentially enabling various genomic prediction, interpretation, and design tasks. Despite their potential, existing benchmarks do not adequately assess the capabilities of DNALMs on key downstream applications involving an important class of non-coding DNA elements critical for regulating gene activity. In this study, we introduce DART-Eval, a suite of representative benchmarks specifically focused on regulatory DNA to evaluate model performance across zero-shot, probed, and fine-tuned scenarios against contemporary *ab initio* models as baselines. Our benchmarks target biologically meaningful downstream tasks such as functional sequence feature discovery, predicting cell-type-specific regulatory activity, and counterfactual prediction of the impacts of genetic variants. We find that current DNALMs exhibit inconsistent performance and do not offer compelling gains over alternative baseline models for most tasks, while requiring significantly more computational resources. We discuss potentially promising modeling, data curation, and evaluation strategies for the next generation of DNALMs. Our code is available at https://github.com/kundajelab/DART-Eval.
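To make the "counterfactual prediction of the impacts of genetic variants" task concrete, a common zero-shot recipe is to score a variant by the difference in a model's log-likelihood between the alternate and reference allele sequences. The sketch below illustrates that idea only; the `toy_log_likelihood` function is a simple k-mer frequency stand-in invented for this example, not a DNALM and not the actual DART-Eval scoring protocol.

```python
# Illustrative zero-shot variant-effect scoring: compare a sequence model's
# log-likelihood of the alternate vs. reference allele sequence.
# NOTE: `toy_log_likelihood` is a toy k-mer frequency model used as a
# stand-in for a real DNALM likelihood; it is an assumption for this sketch.
from collections import Counter
import math

def toy_log_likelihood(seq: str, k: int = 3) -> float:
    """Toy stand-in score: sum of log-frequencies of the sequence's own k-mers."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    counts = Counter(kmers)
    total = len(kmers)
    return sum(math.log(counts[m] / total) for m in kmers)

def variant_effect_score(ref_seq: str, alt_allele: str, pos: int,
                         score_fn=toy_log_likelihood) -> float:
    """Counterfactual score: log-likelihood(alt sequence) - log-likelihood(ref).

    Negative values suggest the substitution disrupts the sequence patterns
    the scoring model has captured; zero means no predicted effect.
    """
    alt_seq = ref_seq[:pos] + alt_allele + ref_seq[pos + 1:]
    return score_fn(alt_seq) - score_fn(ref_seq)

ref = "ACGTACGTACGTACGT"
# Substituting C -> G at position 5 breaks the repetitive pattern,
# so this toy model assigns it a negative (disruptive) score.
delta = variant_effect_score(ref, "G", pos=5)
```

With a real DNALM, `score_fn` would instead aggregate per-token (masked or autoregressive) log-probabilities over the window around the variant; the ref-vs-alt differencing logic stays the same.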