ClimaQA: An Automated Evaluation Framework for Climate Question Answering Models

📅 2024-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack systematic, scientifically rigorous evaluation of their outputs in climate science. To address this gap, the authors propose ClimaQA, the first automated question-answering evaluation framework designed specifically for climate science. The work introduces (1) ClimaGen, an adaptive, scientist-in-the-loop question-generation framework; (2) a dual-track benchmark comprising ClimaQA-Gold, an expert-annotated high-fidelity dataset, and ClimaQA-Silver, a large-scale synthetically generated dataset; and (3) a multidimensional scientificity evaluation covering factual accuracy, logical coherence, and explanatory adequacy, grounded in structured knowledge extraction from graduate-level climate science textbooks and iteratively refined via expert feedback. Comprehensive experiments on mainstream LLMs reveal divergent impacts of various knowledge-enhancement strategies on climate QA performance. All code and datasets are publicly released.
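The summary above describes a scientificity evaluation spanning three dimensions: factual accuracy, logical coherence, and explanatory adequacy. The paper card does not specify how these are aggregated; the sketch below assumes a simple weighted mean as one plausible combination. The function name, signature, and default weights are illustrative, not the repository's actual API.

```python
def scientificity_score(factual: float, coherence: float, adequacy: float,
                        weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three dimension scores (each in [0, 1]) into one number.

    A weighted mean is assumed here; the paper's exact aggregation
    may differ.
    """
    scores = (factual, coherence, adequacy)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("each dimension score must lie in [0, 1]")
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

print(scientificity_score(0.9, 0.8, 0.7))  # ≈ 0.8
```

Unequal weights (e.g. emphasizing factual accuracy) can be passed via the `weights` argument without changing the interface.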

📝 Abstract
The use of Large Language Models (LLMs) in climate science has recently gained significant attention. However, a critical issue remains: the lack of a comprehensive evaluation framework capable of assessing the quality and scientific validity of model outputs. To address this issue, we develop ClimaGen (Climate QA Generator), an adaptive learning framework that generates question-answer pairs from graduate textbooks with climate scientists in the loop. As a result, we present ClimaQA-Gold, an expert-annotated benchmark dataset alongside ClimaQA-Silver, a large-scale, comprehensive synthetic QA dataset for climate science. Finally, we develop evaluation strategies and compare different LLMs on our benchmarks. Our results offer novel insights into various approaches used to enhance knowledge of climate LLMs. The source code is publicly available at https://github.com/Rose-STL-Lab/genie-climaqa
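The abstract's benchmark comparison boils down to scoring each model's answers against the gold labels of a QA dataset. The sketch below shows that loop for multiple-choice questions; `QAPair`, `evaluate`, and the toy model are hypothetical stand-ins, not the actual ClimaQA loader or evaluation code.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    options: list[str]
    answer: str  # label of the correct option, e.g. "B"

def evaluate(model, benchmark: list[QAPair]) -> float:
    """Return the fraction of multiple-choice questions answered correctly."""
    if not benchmark:
        return 0.0
    correct = sum(1 for qa in benchmark
                  if model(qa.question, qa.options) == qa.answer)
    return correct / len(benchmark)

# Toy stand-in model that always picks option "A".
toy_model = lambda question, options: "A"

benchmark = [
    QAPair("Which gas dominates the anthropogenic greenhouse effect?",
           ["A) Carbon dioxide", "B) Molecular oxygen"], "A"),
    QAPair("Which phenomenon couples Pacific sea surface temperatures to the atmosphere?",
           ["A) The jet stream", "B) El Nino-Southern Oscillation"], "B"),
]
print(evaluate(toy_model, benchmark))  # 0.5
```

Swapping `toy_model` for a function that queries a real LLM lets the same loop compare models on ClimaQA-Gold and ClimaQA-Silver.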
Problem

Research questions and friction points this paper is trying to address.

Lack of a comprehensive framework for evaluating the scientific validity of climate QA model outputs.
Need for both expert-annotated and large-scale synthetic QA datasets in climate science.
Unclear which knowledge-enhancement approaches actually improve LLM performance on climate science questions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed ClimaGen, an adaptive, scientist-in-the-loop QA generation framework.
Created ClimaQA-Gold, an expert-annotated benchmark dataset.
Introduced ClimaQA-Silver, a large-scale synthetic QA dataset.
Developed evaluation strategies and compared LLMs on both benchmarks.