🤖 AI Summary
Genomic language models (gLMs) suffer from low cross-modal modeling efficiency: modality-specific models incur redundant infrastructure, while multimodal architectures demand excessive parameters and costly cross-modality pretraining. This paper proposes CodonMoE, a lightweight, codon-level Mixture-of-Experts (MoE) adapter that transfers DNA language models to RNA analysis tasks without RNA-specific pretraining. Its core innovation is a codon-aware adaptive MoE mechanism that, given sufficient expert capacity, can approximate arbitrary mappings from codon sequences to RNA functional properties, thereby unifying DNA and RNA modeling. Combined with subsequence feature reprogramming, CodonMoE improves performance on four RNA prediction tasks, outperforming the unmodified DNA baselines and attaining state-of-the-art (SOTA) results while using 80% fewer parameters than specialized RNA models.
📝 Abstract
Genomic language models (gLMs) face a fundamental efficiency challenge: either maintain separate specialized models for each biological modality (DNA and RNA) or develop large multi-modal architectures. Both approaches impose significant computational burdens: modality-specific models require redundant infrastructure despite inherent biological connections, while multi-modal architectures demand massive parameter counts and extensive cross-modality pretraining. To address this limitation, we introduce CodonMoE (Adaptive Mixture of Codon Reformative Experts), a lightweight adapter that transforms DNA language models into effective RNA analyzers without RNA-specific pretraining. Our theoretical analysis establishes CodonMoE as a universal approximator at the codon level, capable of mapping arbitrary functions from codon sequences to RNA properties given sufficient expert capacity. Across four RNA prediction tasks spanning stability, expression, and regulation, DNA models augmented with CodonMoE significantly outperform their unmodified counterparts, with the HyenaDNA+CodonMoE series achieving state-of-the-art results using 80% fewer parameters than specialized RNA models. By maintaining sub-quadratic complexity while achieving superior performance, our approach provides a principled path toward unifying genomic language modeling, leveraging more abundant DNA data and reducing computational overhead while preserving modality-specific performance advantages.
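To make the architecture concrete, here is a minimal NumPy sketch of a codon-level MoE adapter in the spirit the abstract describes: per-nucleotide embeddings from a frozen DNA language model are grouped into codons (triplets), a softmax gate routes each codon over a pool of small expert networks, and the gate-weighted expert outputs form codon-level features for a downstream RNA task head. All dimensions, the gating scheme, and the expert design here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class CodonMoE:
    """Toy codon-level mixture-of-experts adapter (illustrative sketch only).

    Concatenates each triplet of nucleotide embeddings into a codon vector,
    computes softmax routing weights over the experts, and returns the
    gate-weighted sum of small expert-MLP outputs per codon.
    """
    def __init__(self, d_model, n_experts=4, d_hidden=16):
        d_codon = 3 * d_model  # three nucleotide embeddings per codon
        self.gate = rng.normal(0, 0.1, (d_codon, n_experts))
        self.experts = [
            (rng.normal(0, 0.1, (d_codon, d_hidden)),
             rng.normal(0, 0.1, (d_hidden, d_model)))
            for _ in range(n_experts)
        ]

    def __call__(self, h):
        # h: (seq_len, d_model) nucleotide embeddings, seq_len divisible by 3
        codons = h.reshape(-1, 3 * h.shape[-1])   # (n_codons, 3*d_model)
        w = softmax(codons @ self.gate)           # (n_codons, n_experts)
        outs = np.stack(
            [np.tanh(codons @ w1) @ w2 for w1, w2 in self.experts],
            axis=1,                               # (n_codons, n_experts, d_model)
        )
        return (w[..., None] * outs).sum(axis=1)  # (n_codons, d_model)

# 12 nucleotides -> 4 codons, with a hypothetical embedding width of 8
h = rng.normal(size=(12, 8))
adapter = CodonMoE(d_model=8)
features = adapter(h)
print(features.shape)  # (4, 8)
```

Because the adapter only adds the gate and a few small expert MLPs on top of frozen DNA-model embeddings, its trainable-parameter count stays far below that of an RNA model pretrained from scratch, which is the efficiency argument the abstract makes.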