🤖 AI Summary
Training AI models for drug discovery—particularly protein language models (pLMs)—increasingly relies on large-scale GPU clusters, yet existing frameworks lack both efficiency and usability. Method: We introduce the first high-performance training framework tailored for biochemical AI, enabling scalable pLM development and deployment on hundred-GPU clusters. Built on PyTorch and Megatron-LM, it features a modular architecture supporting flexible integration of optimized data loading, distributed training, mixed-precision arithmetic, sequence parallelism, and high-throughput I/O. Contribution/Results: On 256 NVIDIA A100 GPUs, the framework completes pretraining of a 3-billion-parameter BERT-style pLM on over one trillion tokens in just 4.2 days—substantially lowering the barrier to large-scale pLM training. The framework is open-sourced, enhancing reproducibility and fostering collaborative innovation in computational biology.
📝 Abstract
Artificial Intelligence models encoding biology and chemistry are opening new routes to high-throughput and high-quality in-silico drug development. However, their training increasingly relies on computational scale, with recent protein language models (pLMs) trained on hundreds of graphics processing units (GPUs). We introduce the BioNeMo Framework to facilitate the training of computational biology and chemistry AI models across hundreds of GPUs. Its modular design allows the integration of individual components, such as data loaders, into existing workflows and is open to community contributions. We detail the technical features of the BioNeMo Framework through use cases such as pLM pre-training and fine-tuning. On 256 NVIDIA A100 GPUs, the BioNeMo Framework trains a three-billion-parameter BERT-based pLM on over one trillion tokens in 4.2 days. The BioNeMo Framework is open source and free for everyone to use.
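To make the BERT-style pre-training objective concrete, the sketch below shows the kind of data preparation a pLM data loader performs: protein sequences are tokenized over an amino-acid vocabulary and masked for masked-language-model training. All names here (`AA`, `tokenize`, `mask_tokens`, the token IDs) are hypothetical illustrations, not the BioNeMo Framework's actual API or tokenizer.

```python
import random
from typing import List, Tuple

# Hypothetical amino-acid vocabulary; the real BioNeMo tokenizer differs.
AA = "ACDEFGHIKLMNPQRSTVWY"
PAD, MASK = 0, 1
VOCAB = {aa: i + 2 for i, aa in enumerate(AA)}

def tokenize(seq: str) -> List[int]:
    """Map a protein sequence to integer token IDs."""
    return [VOCAB[aa] for aa in seq]

def mask_tokens(tokens: List[int], p: float = 0.15,
                seed: int = 0) -> Tuple[List[int], List[int]]:
    """BERT-style masking: replace ~p of tokens with MASK.

    Labels keep the original ID at masked positions and -100
    (a conventional ignore index for the loss) everywhere else.
    """
    rng = random.Random(seed)
    inputs, labels = [], []
    for t in tokens:
        if rng.random() < p:
            inputs.append(MASK)
            labels.append(t)
        else:
            inputs.append(t)
            labels.append(-100)
    return inputs, labels

seq = "MKTAYIAKQR"          # toy protein sequence
tok = tokenize(seq)
inp, lab = mask_tokens(tok)
```

In a real pipeline these masked batches would be produced by an optimized data loader and fed to the distributed BERT model; the masking logic itself is the same idea at any scale.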