DART-Eval: A Comprehensive DNA Language Model Evaluation Benchmark on Regulatory DNA

📅 2024-12-06
🏛️ Neural Information Processing Systems
📈 Citations: 7
✨ Influential: 1
🤖 AI Summary
Existing benchmarks inadequately assess the real-world capabilities of DNA language models (DNALMs) on critical regulatory DNA downstream tasks, including functional sequence discovery, cell-type-specific activity prediction, and counterfactual variant impact inference. Method: We introduce DART-Eval, the first comprehensive, regulation-focused evaluation benchmark for DNALMs, supporting zero-shot, probe-based, and fine-tuning paradigms. It integrates self-supervised DNALMs, *ab initio* baselines, and multi-level standardized tasks with biologically grounded metrics. Contribution/Results: Systematic evaluation reveals that state-of-the-art DNALMs show no significant performance advantage over lightweight baselines across most regulatory tasks, while incurring high computational cost and exhibiting unstable behavior. To address this, we propose a biology-informed evaluation framework that clarifies concrete directions for next-generation model development. All code and evaluation protocols are open-sourced to promote reproducibility and standardization of genomic AI assessment.

๐Ÿ“ Abstract
Recent advances in self-supervised models for natural language, vision, and protein sequences have inspired the development of large genomic DNA language models (DNALMs). These models aim to learn generalizable representations of diverse DNA elements, potentially enabling various genomic prediction, interpretation and design tasks. Despite their potential, existing benchmarks do not adequately assess the capabilities of DNALMs on key downstream applications involving an important class of non-coding DNA elements critical for regulating gene activity. In this study, we introduce DART-Eval, a suite of representative benchmarks specifically focused on regulatory DNA to evaluate model performance across zero-shot, probed, and fine-tuned scenarios against contemporary ab initio models as baselines. Our benchmarks target biologically meaningful downstream tasks such as functional sequence feature discovery, predicting cell-type specific regulatory activity, and counterfactual prediction of the impacts of genetic variants. We find that current DNALMs exhibit inconsistent performance and do not offer compelling gains over alternative baseline models for most tasks, while requiring significantly more computational resources. We discuss potentially promising modeling, data curation, and evaluation strategies for the next generation of DNALMs. Our code is available at https://github.com/kundajelab/DART-Eval.
Problem

Research questions and friction points this paper is trying to address.

Evaluating DNA language models on regulatory DNA tasks
Assessing model performance in functional sequence discovery
Comparing DNALMs with baseline models on computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces DART-Eval for regulatory DNA evaluation
Assesses zero-shot, probed, fine-tuned scenarios
Compares DNALMs against ab initio baselines
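The probed scenario in the bullets above can be illustrated with a minimal sketch: a logistic-regression probe trained on frozen model embeddings, so that only the linear head learns while the representation stays fixed. This is not the paper's actual code; the embeddings here are random stand-ins (in practice they would come from a pretrained DNALM's hidden states, e.g. mean-pooled over sequence positions), and all names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for frozen DNALM embeddings of 200 sequences.
n, d = 200, 16
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
# Synthetic binary labels (e.g. "regulatory" vs "control" sequences).
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

# Linear probe: logistic regression fit on the frozen embeddings only,
# via plain gradient descent on the cross-entropy loss.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y) / n)
    b -= lr * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == (y > 0.5))
print(f"probe accuracy: {acc:.2f}")
```

Because the base model is never updated, probe accuracy measures how much task-relevant information the frozen representation already encodes, which is exactly what separates the probed setting from full fine-tuning.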
Aman Patel
Department of Computer Science, School of Engineering, Stanford University
Arpita Singhal
Department of Computer Science, School of Engineering, Stanford University
Austin Wang
Department of Computer Science, School of Engineering, Stanford University
Anusri Pampari
Stanford University
Machine Learning, Regulatory Genomics, NLP
Maya Kasowski
Department of Genetics, School of Medicine, Stanford University; Department of Pathology, School of Medicine, Stanford University
Anshul Kundaje
Associate Professor, Dept. of Genetics, Dept. of Computer Science, Stanford University
Computational Biology, Bioinformatics, Genomics, Sequencing, Applied Machine Learning