WeQA: A Benchmark for Retrieval Augmented Generation in Wind Energy Domain

📅 2024-08-21
📈 Citations: 1
Influential: 1
🤖 AI Summary
Evaluating wind energy projects requires integrating multi-domain information across hundreds of heterogeneous technical documents, a process traditionally hampered by low efficiency and high expertise barriers. Moreover, no standardized benchmark exists to rigorously evaluate Retrieval-Augmented Generation (RAG) models in the wind energy domain. Method: We introduce WeQA, the first vertical-domain benchmark for wind energy, featuring an expert-AI collaborative framework for automated, realistic question generation grounded in authentic documents (e.g., environmental impact reports) and multi-level complex queries. We design multi-granularity evaluation metrics covering retrieval accuracy, answer faithfulness, and reasoning capability. Contribution/Results: We conduct systematic evaluations of mainstream RAG models on WeQA, uncovering critical domain-adaptation limitations. WeQA provides a reproducible, comparable evaluation infrastructure to advance RAG deployment in wind energy decision-making and sets a foundation for domain-specific RAG benchmarking.

📝 Abstract
In the rapidly evolving landscape of Natural Language Processing (NLP) and text generation, the emergence of Retrieval Augmented Generation (RAG) presents a promising avenue for improving the quality and reliability of generated text by leveraging information retrieved from a user-specified database. Benchmarking is essential to evaluate and compare the performance of different RAG configurations in terms of retriever and generator, providing insights into their effectiveness, scalability, and suitability for specific domains and applications. In this paper, we present a comprehensive framework to generate a domain-relevant RAG benchmark. Our framework is based on automatic question-answer generation through teaming between humans (domain experts) and AI Large Language Models (LLMs). As a case study, we demonstrate the framework by introducing WeQA, a first-of-its-kind benchmark on the wind energy domain, which comprises multiple scientific documents/reports related to the environmental impact of wind energy projects. Our framework systematically evaluates RAG performance using diverse metrics and multiple question types with varying complexity levels. We also demonstrate the performance of different models on our benchmark.
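The retrieve-then-generate pattern described in the abstract can be sketched minimally as below. This is an illustrative toy, not the paper's implementation: the corpus, the token-overlap scoring, and the templated `generate` stand-in for an LLM are all assumptions made for clarity.

```python
# Minimal RAG sketch: retrieve relevant passages from a user-specified
# document store, then condition generation on the retrieved context.
# The retriever here is a naive token-overlap ranker; a real system
# would use dense embeddings or BM25, and generate() would call an LLM.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=1):
    """Rank corpus documents by token overlap with the query."""
    scored = sorted(corpus,
                    key=lambda d: len(tokenize(query) & tokenize(d)),
                    reverse=True)
    return scored[:k]

def generate(query, context):
    """Stand-in for an LLM call: answer grounded in retrieved context."""
    return f"Q: {query}\nContext: {' | '.join(context)}"

corpus = [
    "Wind turbines can affect local bird migration patterns.",
    "Environmental impact reports assess noise near wind farms.",
]
query = "How do wind farms affect birds?"
print(generate(query, retrieve(query, corpus)))
```

Benchmarking, as the abstract notes, then varies the retriever and generator components and scores the resulting answers.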
Problem

Research questions and friction points this paper is trying to address.

Challenges in synthesizing wind energy project documentation
Need for benchmarking RAG-based LLMs in complex domains
Developing a domain-specific RAG evaluation framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic question-answer generation with Human-AI teaming
Comprehensive framework for domain-relevant RAG benchmark
Evaluation with diverse metrics and multiple question types
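The human-AI teaming idea above can be sketched as a two-stage pipeline: an LLM proposes candidate question-answer pairs from source passages, and a domain expert vets them before they enter the benchmark. Both `propose_qa` and the keyword-based review predicate are hypothetical stand-ins, not the paper's actual procedure.

```python
# Hedged sketch of automatic QA generation with human-AI teaming:
# stage 1 proposes QA pairs from passages (stand-in for an LLM),
# stage 2 filters them (stand-in for domain-expert review).

def propose_qa(passage):
    """Stand-in for LLM-based question generation from a passage."""
    return {"question": f"What does the report say about: {passage[:40]}...?",
            "answer": passage}

def expert_review(qa, approved_keywords):
    """Stand-in for domain-expert vetting of a candidate QA pair."""
    return any(kw in qa["answer"].lower() for kw in approved_keywords)

passages = [
    "Wind farm noise levels must stay below regulatory thresholds.",
    "The cafeteria menu changes weekly.",
]
benchmark = [qa for p in passages
             if expert_review(qa := propose_qa(p), {"wind", "turbine", "noise"})]
print(len(benchmark))  # only the domain-relevant passage survives review
```

The expert step is what keeps automatically generated questions realistic and domain-grounded, which is the framework's central claim.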
👥 Authors
Rounak Meyur (Pacific Northwest National Laboratory)
Hung Phan (PhD Student in Computer Science at Iowa State University)
S. Wagle (Pacific Northwest National Laboratory)
Jan Strube (Pacific Northwest National Laboratory)
M. Halappanavar (Pacific Northwest National Laboratory)
Sameera Horawalavithana (Pacific Northwest National Laboratory)
Anurag Acharya (Google Inc)
Sai Munikoti (Pacific Northwest National Laboratory)