🤖 AI Summary
Traditional machine learning approaches to active learning in materials science suffer from cold-start limitations and heavy dependence on domain-specific feature engineering, resulting in poor generalizability. To address these challenges, this work proposes a training-free, prompt-driven active learning paradigm based on large language models (LLMs) that recommends experiments directly from raw material text descriptions and numerical properties, eliminating the need for initial labeled data and handcrafted features. Two complementary prompting strategies are designed: concise numerical prompts and extended textual prompts, enabling cross-dataset applicability, robustness, and computational efficiency in candidate screening. Evaluated on four diverse materials datasets, the method converges to high-performance regions using fewer than 30% of the total experiments, substantially outperforming conventional active learning baselines. The results demonstrate superior efficiency, stability, and reproducibility, establishing a scalable, feature-agnostic framework for accelerated materials discovery.
📝 Abstract
Active learning (AL) accelerates scientific discovery by prioritizing the most informative experiments, but the traditional machine learning (ML) models used in AL suffer from cold-start limitations and domain-specific feature engineering, restricting their generalizability. Large language models (LLMs) offer a new paradigm: their pretrained knowledge and universal token-based representations let them propose experiments directly from text-based descriptions. Here, we introduce an LLM-based active learning framework (LLM-AL) that operates in an iterative few-shot setting and benchmark it against conventional ML models across four diverse materials science datasets. We explored two prompting strategies: one using concise numerical inputs, suited to datasets dominated by compositional and structured features, and another using expanded descriptive text, suited to datasets with more experimental and procedural features that benefit from additional context. Across all datasets, LLM-AL reduced the number of experiments needed to reach top-performing candidates by over 70% and consistently outperformed traditional ML models. We found that LLM-AL performs broader, more exploratory searches while still reaching the optimum in fewer iterations. We further examined the stability boundaries of LLM-AL given the inherent non-determinism of LLMs and found its performance to be broadly consistent across runs, within the variability range typically observed for traditional ML approaches. These results demonstrate that LLM-AL can serve as a generalizable alternative to conventional AL pipelines for more efficient and interpretable experiment selection and for potential LLM-driven autonomous discovery.
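The iterative few-shot loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `llm_choose` is a hypothetical stand-in for a real LLM API call (a real system would send the prompt to a model and parse its suggested candidate), and the prompt format only mimics the "concise numerical" strategy.

```python
import random

def llm_choose(prompt, candidates):
    # Hypothetical stand-in for an LLM call. A real implementation would
    # send `prompt` to a model API and parse the returned candidate; here
    # we pick randomly so the sketch runs without external services.
    return random.choice(candidates)

def build_prompt(observed):
    # Concise numerical-prompt style: previously observed
    # (description, property) pairs serve as few-shot examples.
    lines = ["Observed experiments (description -> property):"]
    for desc, y in observed:
        lines.append(f"  {desc} -> {y:.3f}")
    lines.append("Suggest the next candidate most likely to maximize the property.")
    return "\n".join(lines)

def llm_active_learning(pool, measure, budget):
    # Training-free AL loop: no initial labels, no feature engineering.
    # `pool` holds raw text descriptions; `measure` runs the experiment.
    observed = []
    remaining = list(pool)
    for _ in range(budget):
        prompt = build_prompt(observed)
        choice = llm_choose(prompt, remaining)
        remaining.remove(choice)
        observed.append((choice, measure(choice)))
    # Return the best candidate found within the experiment budget.
    return max(observed, key=lambda pair: pair[1])
```

In practice the stopping criterion would be convergence to a top-performing region (the paper reports this occurring within ~30% of the total experiments) rather than a fixed budget.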