🤖 AI Summary
The vast compositional space of double perovskites (DPs) hinders conditional generation of materials tailored to specific functional properties.
Method: We propose a multi-agent, text-gradient-driven framework, the first to integrate knowledge-guided text gradients into materials generation. By combining large language model (LLM)-based self-assessment, domain-specific rule constraints, and feedback from machine learning surrogate models, it enables efficient, natural-language-conditioned co-optimization without additional training data.
Contribution/Results: Our method iteratively refines generation stability and validity via multi-source feedback. Experiments show over 98% compositional validity, with 54% of candidates predicted to be stable or metastable, substantially outperforming pure LLM baselines (43%) and GAN-based approaches (27%). Analyses further show that ML surrogate gradients improve performance in-distribution but become unreliable out-of-distribution, and the framework establishes an interpretable, scalable paradigm for renewable-energy materials discovery.
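The iterative, multi-source feedback loop summarized above can be sketched in miniature. The functions below are hypothetical stand-ins (no names here come from the paper): `generate_candidates` plays the role of the LLM generator, `critique` stands in for the three feedback agents and returns textual "gradients", and the loop appends that feedback to the prompt, a minimal sketch of the text-gradient idea rather than the authors' actual implementation.

```python
import random

def generate_candidates(prompt, n=4):
    """Stand-in for the LLM generator: samples toy A2BB'O6 compositions.
    In the real framework this would be an LLM call conditioned on `prompt`."""
    a_sites = ["Sr", "Ba", "Ca"]
    b_sites = ["Fe", "Mo", "Nb", "Ta"]  # all two-letter symbols, to keep parsing trivial
    rng = random.Random(len(prompt))    # deterministic for this sketch
    return [
        f"{rng.choice(a_sites)}2{rng.choice(b_sites)}{rng.choice(b_sites)}O6"
        for _ in range(n)
    ]

def critique(candidates):
    """Stand-in for the feedback agents (LLM self-evaluation, domain rules,
    ML surrogate). Returns textual 'gradients' for candidates that violate
    a toy domain rule: B and B' in a double perovskite should be distinct."""
    feedback = []
    for c in candidates:
        b_pair = c[3:-2]  # strip the "Xx2" prefix and "O6" suffix
        if b_pair[:2] == b_pair[2:]:
            feedback.append(
                f"{c}: B and B' sites are identical; pick two distinct B-site cations."
            )
    return feedback

def text_gradient_loop(prompt, max_iters=5):
    """Iteratively fold critiques back into the prompt until no feedback remains
    or the iteration budget is spent."""
    candidates = []
    for _ in range(max_iters):
        candidates = generate_candidates(prompt)
        feedback = critique(candidates)
        if not feedback:
            break
        prompt += "\nConstraints from feedback:\n" + "\n".join(feedback)
    return candidates, prompt
```

The design point the sketch tries to capture is that the "gradient" is natural language: each critic's objection is concatenated into the next generation prompt, so no model weights are updated and no extra training data is needed.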
📝 Abstract
Double perovskites (DPs) are promising candidates for sustainable energy technologies due to their compositional tunability and compatibility with low-energy fabrication, yet their vast design space poses a major challenge for conditional materials discovery. This work introduces a multi-agent, text-gradient-driven framework that performs DP composition generation under natural-language conditions by integrating three complementary feedback sources: LLM-based self-evaluation, feedback informed by DP-specific domain knowledge, and feedback from ML surrogate models. Analogous to how knowledge-informed machine learning improves the reliability of conventional data-driven models, our framework incorporates domain-informed text gradients to guide the generative process toward physically meaningful regions of the DP composition space. Systematic comparison of three incremental configurations, (i) pure LLM generation, (ii) LLM generation with LLM reasoning-based feedback, and (iii) LLM generation with domain-knowledge-guided feedback, shows that iterative guidance from knowledge-informed gradients improves stability-condition satisfaction without additional training data, achieving over 98% compositional validity and up to 54% stable or metastable candidates, surpassing both the LLM-only baseline (43%) and prior GAN-based results (27%). Analyses of ML-based gradients further reveal that they enhance performance in in-distribution (ID) regions but become unreliable in out-of-distribution (OOD) regimes. Overall, this work provides the first systematic analysis of multi-agent, knowledge-guided text gradients for DP discovery and establishes a generalizable blueprint for multi-agent-system (MAS)-driven generative materials design aimed at advancing sustainable technologies.
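The abstract mentions feedback informed by DP-specific domain knowledge. One classic rule such a critic could encode is the Goldschmidt tolerance factor, which for a double perovskite A2BB'X6 is commonly evaluated with the average B-site radius. The function below is an illustrative example of a domain-rule check, not the paper's actual rule set; the Shannon ionic radii used for Sr2FeMoO6 are approximate literature values.

```python
from math import sqrt

def tolerance_factor(r_A, r_B, r_Bp, r_X=1.40):
    """Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B_avg + r_X))
    for a double perovskite A2BB'X6, using the average B-site radius.
    Radii are in Angstrom; the default r_X = 1.40 A is the Shannon radius of O2-.
    Values near 1 favor the cubic structure; roughly 0.8-1.0 is the
    empirically stable perovskite window."""
    r_B_avg = (r_B + r_Bp) / 2.0
    return (r_A + r_X) / (sqrt(2.0) * (r_B_avg + r_X))

# Sr2FeMoO6 with approximate Shannon radii (A):
# Sr2+ (XII-coordinated) ~1.44, Fe3+ (VI) ~0.645, Mo5+ (VI) ~0.61
t = tolerance_factor(1.44, 0.645, 0.61)  # close to 1, consistent with a stable DP
```

A rule like this is cheap to evaluate, so a domain-knowledge critic can phrase violations (t far from 1) as natural-language feedback for the generator instead of silently filtering candidates.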