🤖 AI Summary
Large language models (LLMs) lack domain-specific evaluation benchmarks for hydrology and water resources engineering (Hydro-SE), hindering rigorous assessment of their domain expertise. Method: We introduce Hydro-SE Bench—the first comprehensive, expert-curated benchmark for Hydro-SE—comprising 4,000 multiple-choice questions across nine subdomains, systematically evaluating foundational knowledge, engineering application, and computational reasoning. The benchmark integrates natural/physical science principles, engineering practice, and interdisciplinary reasoning dimensions, and is used to evaluate both leading commercial LLMs and small-parameter open-source models. Results: Commercial models achieve significantly higher accuracy (74–80%) than open-source counterparts (41–68%), yet all models exhibit consistent weaknesses in compliance with industry standards and in hydraulic structure analysis. This work fills a critical gap by establishing the first standardized Hydro-SE evaluation framework, precisely identifying capability bottlenecks, and providing a quantifiable foundation for domain-adapted model development and real-world deployment.
📝 Abstract
Hydro-Science and Engineering (Hydro-SE) is a critical and irreplaceable domain that secures human water supply, generates clean hydropower energy, and mitigates flood and drought disasters. Pursuing multiple engineering objectives at once, Hydro-SE is an inherently interdisciplinary domain that integrates scientific knowledge with engineering expertise. This integration necessitates extensive expert collaboration in decision-making, which poses challenges for intelligent, automated approaches. With the rapid advancement of large language models (LLMs), their potential application in the Hydro-SE domain is being increasingly explored. However, the knowledge and application abilities of LLMs in Hydro-SE have not been sufficiently evaluated. To address this issue, we propose the Hydro-SE LLM evaluation benchmark (Hydro-SE Bench), which contains 4,000 multiple-choice questions. Hydro-SE Bench covers nine subfields and enables evaluation of LLMs in terms of basic conceptual knowledge, engineering application ability, and reasoning and calculation ability. The evaluation results on Hydro-SE Bench show that accuracy ranges from 0.74 to 0.80 for commercial LLMs, and from 0.41 to 0.68 for small-parameter LLMs. While LLMs perform well in subfields closely related to the natural and physical sciences, they struggle with domain-specific knowledge such as industry standards and hydraulic structures. Model scaling mainly improves reasoning and calculation abilities, but there remains great potential for LLMs to better handle problems in practical engineering applications. This study highlights the strengths and weaknesses of LLMs on Hydro-SE tasks, providing model developers with clear training targets and Hydro-SE researchers with practical guidance for applying LLMs.
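The benchmark's headline metric is plain multiple-choice accuracy, reported overall and per subfield. A minimal sketch of how such scores could be computed is shown below; the data layout, subfield names, and function name are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: overall and per-subfield accuracy on
# multiple-choice questions, as described in the abstract.
from collections import defaultdict

def accuracy_by_subfield(items):
    """items: iterable of (subfield, predicted_choice, correct_choice)."""
    correct = defaultdict(int)  # correct answers per subfield
    total = defaultdict(int)    # question counts per subfield
    for subfield, pred, gold in items:
        total[subfield] += 1
        if pred == gold:
            correct[subfield] += 1
    per_field = {s: correct[s] / total[s] for s in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_field

# Toy illustration with made-up subfield labels and answers.
items = [
    ("hydrology", "A", "A"),
    ("hydrology", "B", "C"),
    ("hydraulic structures", "D", "D"),
    ("hydraulic structures", "A", "D"),
]
overall, per_field = accuracy_by_subfield(items)
# overall = 0.5; both subfields score 0.5 on this toy data
```

Reporting accuracy per subfield, rather than only in aggregate, is what lets the paper localize weaknesses (e.g., industry standards, hydraulic structures) instead of just ranking models.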