🤖 AI Summary
This work addresses the limited performance of large language models (LLMs) on molecular property prediction, which hinders their practical utility in drug discovery. To bridge this gap, the authors propose TreeKD, a novel framework that, for the first time, translates the interpretable rules learned by tree-based models (such as decision trees and random forests) trained on functional group features into natural language. These rule-derived explanations are then supplied to LLMs via contextual injection, together with a test-time rule-consistency ensembling strategy, effectively enabling knowledge distillation and rule-augmented inference. Evaluated on 22 ADMET property prediction tasks from the Therapeutics Data Commons (TDC) benchmark, TreeKD substantially improves LLM performance and significantly narrows the accuracy gap between LLMs and state-of-the-art specialist models.
📝 Abstract
Molecular Property Prediction (MPP) is a central task in drug discovery. While Large Language Models (LLMs) show promise as generalist models for MPP, their current performance remains below the threshold for practical adoption. We propose TreeKD, a novel knowledge distillation method that transfers complementary knowledge from tree-based specialist models into LLMs. Our approach trains specialist decision trees on functional group features, then verbalizes their learned predictive rules as natural language to enable rule-augmented in-context learning. This enables LLMs to leverage structural insights that are difficult to extract from SMILES strings alone. We further introduce rule-consistency, a test-time scaling technique inspired by bagging that ensembles predictions across diverse rules from a Random Forest. Experiments on 22 ADMET properties from the TDC benchmark demonstrate that TreeKD substantially improves LLM performance, narrowing the gap with SOTA specialist models and advancing toward practical generalist models for molecular property prediction.
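The two mechanisms the abstract describes, verbalizing a tree's decision path as a natural-language rule and ensembling per-tree predictions at test time, can be sketched in a minimal form. The feature names, toy data, and the verbalization template below are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch, assuming binary functional-group indicator features:
# (1) verbalize the decision path a sample takes through each tree,
# (2) "rule-consistency": a bagging-style majority vote over per-tree predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GROUPS = ["hydroxyl", "carboxyl", "aromatic_ring", "halogen"]  # hypothetical features

# Toy dataset: each row is a molecule encoded as functional-group indicators.
X = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # hypothetical binary ADMET label

forest = RandomForestClassifier(n_estimators=5, random_state=0).fit(X, y)

def verbalize_path(estimator, x):
    """Turn the decision path for sample x into one natural-language rule."""
    t = estimator.tree_
    node, clauses = 0, []
    while t.children_left[node] != -1:  # descend until a leaf is reached
        feat, thr = t.feature[node], t.threshold[node]
        if x[feat] <= thr:
            clauses.append(f"the molecule lacks a {GROUPS[feat]} group")
            node = t.children_left[node]
        else:
            clauses.append(f"the molecule contains a {GROUPS[feat]} group")
            node = t.children_right[node]
    pred = int(np.argmax(t.value[node]))
    rule = "If " + " and ".join(clauses) + f", then predict class {pred}."
    return rule, pred

x_new = np.array([1, 0, 1, 0])
rules, votes = zip(*(verbalize_path(est, x_new) for est in forest.estimators_))
consensus = int(np.bincount(votes).argmax())  # majority vote across trees
print(rules[0])
print("rule-consistency prediction:", consensus)
```

In the full method these verbalized rules would be injected into the LLM's prompt; here the majority vote stands in for ensembling the LLM's rule-conditioned answers.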