🤖 AI Summary
Existing methods struggle to combine scalable exploration of chemical space with accurate prediction of atomistic, thermodynamic, and kinetic molecular properties, hindering materials innovation. To address this, we propose the MIST family of molecular foundation models, built on a novel tokenization scheme that jointly encodes nuclear, electronic, and geometric information. We further develop hyperparameter-penalized Bayesian neural scaling laws that reduce the computational cost of model development by an order of magnitude. Trained with self-supervised learning, MIST matches or exceeds state-of-the-art performance across 400+ tasks spanning physiology, electrochemistry, and quantum chemistry, while mechanistic interpretability analysis reveals emergent, scientifically interpretable regularities not explicitly present in the training data. The models demonstrate practical utility in real-world applications including electrolyte screening, odor modeling, and isotope half-life prediction.
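To make the tokenization idea concrete, here is a minimal sketch of what jointly encoding nuclear, electronic, and geometric information per atom could look like. The token names, the charge/coordinate-bin vocabulary, and the binning width are illustrative assumptions, not MIST's actual scheme.

```python
# A minimal, hypothetical sketch of joint nuclear/electronic/geometric
# tokenization. The token vocabulary and coordinate binning below are
# illustrative assumptions, not MIST's actual tokenizer.
def tokenize_atom(symbol, isotope=None, charge=0, xyz=(0.0, 0.0, 0.0), bin_width=0.5):
    tokens = []
    # Nuclear information: element symbol, optionally with isotope mass number.
    tokens.append(f"[{isotope}{symbol}]" if isotope else f"[{symbol}]")
    # Electronic information: formal charge as an explicit token.
    if charge:
        tokens.append(f"[chg{charge:+d}]")
    # Geometric information: 3-D coordinates discretized into grid bins.
    tokens.extend(f"[{ax}{round(c / bin_width)}]" for ax, c in zip("xyz", xyz))
    return tokens

# Example: a deuterium atom at coordinates (0.0, 0.76, -0.48) Å.
print(tokenize_atom("H", isotope=2, xyz=(0.0, 0.76, -0.48)))
# -> ['[2H]', '[x0]', '[y2]', '[z-1]']
```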
📝 Abstract
Accurate prediction of atomistic, thermodynamic, and kinetic properties from molecular structures underpins materials innovation. Existing computational and experimental approaches lack the scalability required to efficiently navigate chemical space. Scientific foundation models trained on large unlabeled datasets offer a path toward exploring chemical space across diverse application domains. Here we develop MIST, a family of molecular foundation models with up to an order of magnitude more parameters and data than prior work. Trained using a novel tokenization scheme that comprehensively captures nuclear, electronic, and geometric information, MIST learns from a diverse range of molecules. MIST models have been fine-tuned to predict more than 400 structure–property relationships and match or exceed state-of-the-art performance across benchmarks spanning physiology, electrochemistry, and quantum chemistry. We demonstrate the ability of these models to solve real-world problems across chemical space, including multiobjective electrolyte solvent screening, olfactory perception mapping, isotope half-life prediction, stereochemical reasoning for chiral organometallic compounds, and binary and multi-component mixture property prediction. Probing MIST models using mechanistic interpretability methods reveals identifiable patterns and trends not explicitly present in the training data, suggesting that the models learn generalizable scientific concepts. We formulate hyperparameter-penalized Bayesian neural scaling laws and use them to reduce the computational cost of model development by an order of magnitude. The methods and findings presented here represent a significant step toward accelerating materials discovery, design, and optimization using foundation models and provide valuable guidance for training compute-optimal scientific foundation models.
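For intuition, the sketch below shows one way a hyperparameter-penalized scaling-law fit could be set up: a Chinchilla-style ansatz L(N, D) = E + A·N^(−α) + B·D^(−β), with an additive quadratic penalty for runs whose hyperparameter (here, log learning rate) sits off its optimum, fit by MAP estimation as a stand-in for a full Bayesian treatment. The functional form, the penalty, the priors, and all parameter names are assumptions for illustration, not the paper's formulation.

```python
# A minimal sketch of a hyperparameter-penalized scaling-law fit. The
# Chinchilla-style form and quadratic penalty are assumptions, not the
# paper's actual Bayesian formulation (reduced here to a MAP fit).
import numpy as np
from scipy.optimize import minimize

def predicted_loss(theta, N, D, h):
    """Scaling-law prediction plus an additive penalty for runs whose
    hyperparameter h (e.g. log learning rate) is off its learned optimum."""
    E, logA, a, logB, b, h_opt, log_g = theta
    base = E + np.exp(logA) * N**(-a) + np.exp(logB) * D**(-b)
    return base + np.exp(log_g) * (h - h_opt) ** 2

def neg_log_posterior(theta, N, D, h, L_obs, sigma=0.05):
    """Gaussian likelihood plus broad Gaussian priors on all parameters."""
    resid = L_obs - predicted_loss(theta, N, D, h)
    nll = 0.5 * np.sum((resid / sigma) ** 2)
    prior = 0.5 * np.sum((np.asarray(theta) / 10.0) ** 2)
    return nll + prior

# Synthetic example: runs of varying parameter count N, data size D, and
# log10 learning rate h, with noisy observed losses.
rng = np.random.default_rng(0)
N = rng.uniform(1e7, 1e9, 40)
D = rng.uniform(1e9, 1e11, 40)
h = rng.uniform(-4, -2, 40)
true = np.array([1.7, np.log(400.0), 0.34, np.log(410.0), 0.28, -3.0, np.log(2.0)])
L_obs = predicted_loss(true, N, D, h) + rng.normal(0, 0.05, 40)

fit = minimize(neg_log_posterior,
               x0=np.array([2.0, 5.0, 0.3, 5.0, 0.3, -3.5, 0.0]),
               args=(N, D, h, L_obs), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-8})
print("fitted parameters:", fit.x)
```

The penalty term is what lets a single fit pool runs with imperfect hyperparameters instead of discarding them, which is one plausible route to the order-of-magnitude reduction in sweep cost the abstract describes.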