JanusDNA: A Powerful Bi-directional Hybrid DNA Foundation Model

📅 2025-05-22
🤖 AI Summary
Existing language models struggle to capture long-range dependencies in DNA sequences (>10 kb), and the unidirectional autoregressive pretraining paradigm contradicts the inherently bidirectional nature of genomic regulation, while masked modeling is bidirectional but sample-inefficient. Method: JanusDNA is the first bidirectional DNA foundation model, built on a pretraining paradigm that unifies the optimization efficiency of autoregressive modeling with the bidirectional comprehension of masked modeling. Its architecture combines hybrid Mamba-Attention blocks with sparse Mixture-of-Experts (MoE) layers, and it processes megabase-scale sequences at single-nucleotide resolution on a single 80 GB GPU. Contribution/Results: JanusDNA achieves new state-of-the-art results on three major genomic representation benchmarks, outperforming models with 250× more activated parameters, improving both computational efficiency and biological plausibility for accurate, scalable long-range genomic sequence modeling.

📝 Abstract
Large language models (LLMs) have revolutionized natural language processing and are increasingly applied to other sequential data types, including genetic sequences. However, adapting LLMs to genomics presents significant challenges. Capturing complex genomic interactions requires modeling long-range dependencies within DNA sequences, where interactions often span over 10,000 base pairs, even within a single gene, posing substantial computational burdens under conventional model architectures and training paradigms. Moreover, standard LLM training approaches are suboptimal for DNA: autoregressive training, while efficient, supports only unidirectional understanding. However, DNA is inherently bidirectional, e.g., bidirectional promoters regulate transcription in both directions and account for nearly 11% of human gene expression. Masked language models (MLMs) allow bidirectional understanding but are inefficient, as only masked tokens contribute to the loss per step. To address these limitations, we introduce JanusDNA, the first bidirectional DNA foundation model built upon a novel pretraining paradigm that combines the optimization efficiency of autoregressive modeling with the bidirectional comprehension of masked modeling. JanusDNA adopts a hybrid Mamba, Attention and Mixture of Experts (MoE) architecture, combining long-range modeling of Attention with efficient sequential learning of Mamba. MoE layers further scale model capacity via sparse activation while keeping computational cost low. Notably, JanusDNA processes up to 1 million base pairs at single nucleotide resolution on a single 80GB GPU. Extensive experiments and ablations show JanusDNA achieves new SOTA results on three genomic representation benchmarks, outperforming models with 250x more activated parameters. Code: https://github.com/Qihao-Duan/JanusDNA
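The abstract's efficiency argument can be made concrete: under autoregressive training nearly every position in the sequence is a prediction target, whereas under masked modeling only the masked positions contribute to the loss. A minimal sketch in pure Python (the 15% mask rate is a common BERT-style default used here for illustration, not a figure from the paper):

```python
import random

def supervised_fraction_ar(seq_len: int) -> float:
    # Autoregressive training: every token except the first is a
    # prediction target, so almost the whole sequence contributes loss.
    return (seq_len - 1) / seq_len

def supervised_fraction_mlm(seq_len: int, mask_rate: float = 0.15) -> float:
    # Masked modeling: only the randomly masked positions are scored.
    masked = sum(1 for _ in range(seq_len) if random.random() < mask_rate)
    return masked / seq_len

random.seed(0)
n = 100_000  # illustrative sequence length
print(f"AR  loss coverage: {supervised_fraction_ar(n):.3f}")
print(f"MLM loss coverage: {supervised_fraction_mlm(n):.3f}")
```

The gap (~100% of tokens supervised per step versus ~15%) is what JanusDNA's pretraining paradigm aims to close while keeping the bidirectional view of masked modeling.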
Problem

Research questions and friction points this paper is trying to address.

Modeling long-range dependencies in DNA sequences efficiently
Enabling bidirectional understanding of DNA sequences effectively
Combining autoregressive and masked modeling for genomic analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Mamba-Attention-MoE architecture for DNA
Bidirectional pretraining combining autoregressive and masked modeling
Processes up to 1 million base pairs at single-nucleotide resolution on a single 80 GB GPU
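The MoE claim in the list above rests on sparse activation: only a small top-k subset of experts runs per token, so the per-token activated parameter count stays far below the model's total capacity. A toy accounting sketch (expert count, expert size, and k are illustrative, not JanusDNA's actual configuration):

```python
def moe_param_counts(num_experts: int, params_per_expert: int, top_k: int):
    """Total vs. per-token activated parameters for one sparse MoE layer."""
    total = num_experts * params_per_expert
    activated = top_k * params_per_expert  # only top_k experts run per token
    return total, activated

# Hypothetical numbers for illustration only.
total, activated = moe_param_counts(num_experts=8,
                                    params_per_expert=1_000_000,
                                    top_k=2)
print(f"total={total:,}  activated per token={activated:,}  "
      f"ratio={total / activated:.0f}x")  # → ratio=4x
```

This is the mechanism behind the "outperforming models with 250× more activated parameters" comparison: benchmarks compare compute actually spent per token, not total stored weights.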