Unveiling Scaling Behaviors in Molecular Language Models: Effects of Model Size, Data, and Representation

📅 2026-01-30
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
It remains unclear whether molecular language models exhibit predictable scaling laws under a fixed computational budget, hindering optimal allocation of resources among model size, dataset scale, and molecular representation. This work systematically investigates these factors by training 300 GPT-based models and conducting over 10,000 experiments under strictly controlled compute constraints. We uncover consistent scaling laws for molecular language models in both pretraining and downstream tasks, demonstrate the critical influence of molecular string representations on performance, and resolve the apparent inconsistencies in prior scaling behaviors. To support future research, we release the largest publicly available library of molecular language models to date, establishing a foundational benchmark and resource for the community.
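To make the notion of a "predictable scaling law" concrete, below is a minimal, hypothetical sketch of fitting a saturating power law to (compute, loss) measurements. The data points, the functional form L(C) = a·C^(−b) + c, and the use of scipy.optimize.curve_fit are illustrative assumptions, not the paper's actual data or fitting procedure.

```python
# Hedged sketch: fitting a power-law scaling curve of the kind the summary
# describes. The (compute, loss) pairs are placeholders, not the paper's data;
# the saturating power law below is one common parameterization of scaling
# laws, not necessarily the form the authors fit.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(compute, a, b, c):
    """Saturating power law: loss falls as compute**(-b) toward a floor c."""
    return a * compute ** (-b) + c

# Placeholder measurements: training compute (FLOPs) vs. validation loss.
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
loss = np.array([2.10, 1.72, 1.45, 1.27, 1.15])

params, _ = curve_fit(scaling_law, compute, loss, p0=(10.0, 0.1, 1.0))
a, b, c = params
print(f"fitted exponent b = {b:.3f}, irreducible loss c = {c:.3f}")
```

If the fitted curve extrapolates well to unseen budgets, the scaling behavior is "predictable" in the sense the summary uses; the paper's contribution is establishing that such fits hold for molecular language models across representations and tasks.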

📝 Abstract
Molecular generative models, often employing GPT-style language modeling on molecular string representations, have shown promising capabilities when scaled to large datasets and model sizes. However, whether these models adhere to predictable scaling laws under fixed computational budgets remains unclear and subject to debate, and this understanding is crucial for optimally allocating resources among model size, data volume, and molecular representation. In this study, we systematically investigate the scaling behavior of molecular language models across both pretraining and downstream tasks. We train 300 models and conduct over 10,000 experiments, rigorously controlling compute budgets while independently varying model size, number of training tokens, and molecular representation. Our results demonstrate clear scaling laws for molecular models in both pretraining and downstream transfer, reveal the substantial impact of molecular representation on performance, and explain previously observed inconsistencies in scaling behavior for molecular generation. Additionally, we publicly release the largest library of molecular language models to date to facilitate future research and development. Code and models are available at https://github.com/SZU-ADDG/MLM-Scaling.
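The abstract's protocol of fixing compute while independently varying model size and token count can be illustrated with the standard C ≈ 6·N·D approximation for transformer training FLOPs (N parameters, D tokens). The sketch below is an assumption-laden illustration: the budget value, the grid of model sizes, and the 6ND rule of thumb are not taken from the paper.

```python
# Hedged sketch of an iso-FLOP sweep: hold total compute fixed and trade
# model size against training tokens, as the abstract describes. Uses the
# common C ~ 6 * N * D approximation for transformer training FLOPs; the
# budget and model-size grid are illustrative assumptions, not the paper's.
BUDGET_FLOPS = 1e18  # assumed fixed compute budget

model_sizes = [1e6, 5e6, 25e6, 125e6]  # parameter counts to compare

for n_params in model_sizes:
    n_tokens = BUDGET_FLOPS / (6 * n_params)  # tokens affordable at this size
    print(f"N = {n_params:9.0f} params -> D = {n_tokens:12.0f} tokens")
```

Each (N, D) pair on such an iso-FLOP curve costs the same compute; training one model per pair and comparing losses reveals the compute-optimal allocation, which is the kind of controlled comparison the paper runs at scale.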
Problem

Research questions and friction points this paper is trying to address.

scaling laws
molecular language models
model size
data volume
molecular representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

scaling laws
molecular language models
model scaling
molecular representation
compute budget
👥 Authors
Dong Xu, Shenzhen University (Artificial Intelligence, Drug Design)
Qihua Pan, School of Artificial Intelligence, Shenzhen University, Shenzhen 518060, China
Sisi Yuan, School of Artificial Intelligence, Shenzhen University, Shenzhen 518060, China
Jianqiang Li, Shenzhen University (CPS, Robotics, Internet of Things)
Zexuan Zhu, Shenzhen University (Evolutionary Computation, Memetic Computing, Bioinformatics, Machine Learning)
Junkai Ji, School of Artificial Intelligence, Shenzhen University, Shenzhen 518060, China