🤖 AI Summary
High-quality instruction data for climate change research remains scarce, hindering the development of domain-specific large language models (LLMs). To address this, we propose an end-to-end framework for automatically generating climate-science-oriented instruction data, ClimateChat-Corpus, by integrating authoritative climate documents, structured knowledge bases, and web-crawled seed instructions. Leveraging this corpus, we conduct supervised fine-tuning on open-source LLMs (e.g., Llama-3, Qwen) to develop ClimateChat, a specialized question-answering model for climate science. Our contributions include: (1) a multi-source, knowledge-driven instruction generation paradigm; (2) a co-optimization mechanism between base models and instruction data; and (3) a fine-grained, climate-specific evaluation benchmark. Experiments demonstrate that ClimateChat significantly outperforms general-purpose LLMs across diverse climate QA and scientific discovery tasks. Moreover, our methodology provides a reproducible, scalable framework for constructing domain-specific instruction datasets, accompanied by empirical guidelines for climate AI development.
📝 Abstract
As global climate change becomes increasingly severe, the demand for climate science research continues to grow. Natural language processing technologies, represented by large language models (LLMs), have been widely applied to climate change research, providing essential information support for decision-makers and the public. Some studies have improved model performance on related tasks by constructing climate change instruction data and instruction-tuning LLMs. However, current approaches cannot yet efficiently produce large volumes of high-precision climate change instruction data, which limits the further development of climate change LLMs. This study introduces an automated method for constructing instruction data: it generates instructions from facts and background knowledge in documents and enhances instruction diversity through web scraping and the collection of seed instructions. Using this method, we constructed a climate change instruction dataset, ClimateChat-Corpus, and used it to fine-tune open-source LLMs, yielding a model named ClimateChat. Evaluation results show that ClimateChat significantly improves performance on climate change question-answering tasks. We further evaluated the impact of different base models and instruction data on model performance, underscoring the importance of selecting an appropriate base model for instruction tuning, and demonstrated ClimateChat's ability to adapt to a wide range of climate change scientific discovery tasks. This research provides valuable references and empirical support for constructing climate change instruction data and training climate-change-specific LLMs.
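The abstract's core idea, pairing facts extracted from documents with seed instructions to produce instruction-tuning examples, can be sketched as follows. This is an illustrative toy implementation, not the paper's actual pipeline: the function name `make_instruction_pairs`, the seed templates, and the sample facts are all hypothetical placeholders for whatever extraction and seed-collection steps the real method uses.

```python
import json
import random

# Hypothetical seed instructions, standing in for templates collected
# via web scraping in the paper's method.
SEED_TEMPLATES = [
    "What does the source say about {topic}?",
    "Summarize the key fact regarding {topic}.",
    "Explain {topic} in the context of climate change.",
]

def make_instruction_pairs(facts, templates=SEED_TEMPLATES):
    """Pair each extracted (topic, fact) tuple with a randomly chosen
    seed template to form one instruction-tuning example."""
    pairs = []
    for topic, fact in facts:
        template = random.choice(templates)
        pairs.append({
            "instruction": template.format(topic=topic),
            "output": fact,
        })
    return pairs

# Toy facts, standing in for knowledge extracted from climate documents.
facts = [
    ("sea-level rise", "Global mean sea level has risen roughly 20 cm since 1900."),
    ("ocean carbon uptake", "Oceans absorb about a quarter of annual CO2 emissions."),
]

for pair in make_instruction_pairs(facts):
    print(json.dumps(pair))
```

Each printed line is one JSON instruction-output pair; a real pipeline would add quality filtering and deduplication before using such pairs for supervised fine-tuning.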