How Well Do Large-Scale Chemical Language Models Transfer to Downstream Tasks?

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether scaling model size, data volume, and training compute consistently improves molecular property prediction performance in chemistry. By systematically pretraining a series of chemical language models and evaluating their transfer capabilities on a multitask downstream benchmark, the authors uncover a significant disconnect between pretraining metrics and downstream performance. Despite continual reductions in pretraining loss, downstream results plateau or even degrade. Through analyses of pretraining loss landscapes, Hessian spectra, and parameter space visualizations, the work demonstrates that conventional proxy metrics fail to reliably predict transfer effectiveness. The findings underscore the necessity of selecting and evaluating models based on downstream task characteristics rather than relying solely on pretraining objectives.
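
As a concrete illustration of the kind of proxy-metric check the summary describes, the sketch below computes a rank correlation between pretraining loss and downstream scores across a series of scaled checkpoints. This is a minimal, hypothetical example: the numbers are placeholders, not results from the paper.

```python
# Minimal sketch (hypothetical data, not the authors' results): does pretraining
# loss rank-order checkpoints the same way downstream MPP performance does?
from scipy.stats import spearmanr

# One entry per (model size, data size, compute) setting.
pretrain_loss  = [0.92, 0.74, 0.61, 0.55, 0.51, 0.49]   # lower is better
downstream_auc = [0.71, 0.78, 0.80, 0.80, 0.79, 0.78]   # e.g., mean ROC-AUC on MPP tasks

# Rank correlation between the proxy (negated pretraining loss) and transfer quality.
rho, p_value = spearmanr([-x for x in pretrain_loss], downstream_auc)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A weak or non-monotonic rho would mirror the reported finding that pretraining
# loss keeps improving while downstream performance plateaus or degrades.
```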

📝 Abstract
Chemical Language Models (CLMs) pre-trained on large-scale molecular data are widely used for molecular property prediction. However, the common belief that increasing training resources such as model size, dataset size, and training compute improves both pretraining loss and downstream task performance has not been systematically validated in the chemical domain. In this work, we evaluate this assumption by pretraining CLMs while scaling training resources and measuring transfer performance across diverse molecular property prediction (MPP) tasks. We find that while pretraining loss consistently decreases with increased training resources, downstream task performance shows limited improvement. Moreover, alternative metrics based on the Hessian or loss landscape also fail to estimate downstream performance in CLMs. We further identify conditions under which downstream performance saturates or degrades despite continued improvements in pretraining metrics, and analyze the underlying task-dependent failure modes through parameter-space visualizations. These results expose a gap between pretraining-based evaluation and downstream performance, and emphasize the need for model selection and evaluation strategies that explicitly account for downstream task characteristics.
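
The abstract refers to Hessian-based alternatives to pretraining loss as proxy metrics. The sketch below shows one standard way such a sharpness measure can be estimated, via power iteration on Hessian-vector products in PyTorch. It is an illustrative assumption, not the paper's implementation; `model` and `batch` in the usage note are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code): estimate the top Hessian
# eigenvalue of a pretraining loss with power iteration on Hessian-vector products.
import torch

def top_hessian_eigenvalue(loss, params, iters=20):
    """Power iteration on the Hessian of `loss` with respect to `params`."""
    params = [p for p in params if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]

    eigenvalue = 0.0
    for _ in range(iters):
        # Normalize the current direction.
        norm = torch.sqrt(sum((x * x).sum() for x in v))
        v = [x / norm for x in v]
        # Hessian-vector product: d/dp (grad . v).
        gv = sum((g * x).sum() for g, x in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        # Rayleigh quotient estimate of the dominant eigenvalue.
        eigenvalue = sum((h * x).sum() for h, x in zip(hv, v)).item()
        v = [h.detach() for h in hv]
    return eigenvalue

# Usage (hypothetical model and batch):
# loss = model(batch).loss
# lam_max = top_hessian_eigenvalue(loss, model.parameters())
```
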
Problem

Research questions and friction points this paper is trying to address.

Chemical Language Models
downstream task performance
pretraining loss
molecular property prediction
transfer learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chemical Language Models
Downstream Task Transfer
Scaling Laws
Loss Landscape
Molecular Property Prediction