🤖 AI Summary
Current foundation models for weather science lack language-reasoning capabilities, while large language models (LLMs) struggle with high-dimensional meteorological data; neither adequately supports interactive scientific analysis. To close this gap, we propose Zephyrus, the first language-driven agent framework tailored to atmospheric science. Our approach introduces ZephyrusWorld, an interactive environment that provides agents with a Python execution engine, an interface to the WeatherBench 2 dataset, geospatial natural-language querying, and weather-forecasting and climate-simulation tools; designs Zephyrus, a multi-turn LLM-based meteorological agent supporting tasks from basic lookups to forecasting, extreme-event detection, and counterfactual reasoning; and releases ZephyrusBench, a standardized evaluation benchmark with a scalable question-answer generation pipeline. Experiments show that Zephyrus significantly outperforms text-only baselines across multiple tasks, improving correctness by up to 35 percentage points, though it performs comparably to those baselines on the hardest tasks. This work establishes a new paradigm for meteorological AI agents and provides a scalable, extensible infrastructure for future research.
📝 Abstract
Foundation models for weather science are pre-trained on vast amounts of structured numerical data and outperform traditional weather forecasting systems. However, these models lack language-based reasoning capabilities, limiting their utility in interactive scientific workflows. Large language models (LLMs) excel at understanding and generating text but cannot reason about high-dimensional meteorological datasets. We bridge this gap by building a novel agentic framework for weather science. Our framework includes a Python code-based environment, ZephyrusWorld, in which agents interact with weather data through tools such as an interface to the WeatherBench 2 dataset, geoquerying that converts natural language into geographical masks, weather forecasting, and climate simulation. We design Zephyrus, a multi-turn LLM-based weather agent that iteratively analyzes weather datasets, observes results, and refines its approach through conversational feedback loops. We pair the agent with a new benchmark, ZephyrusBench, whose scalable data generation pipeline constructs diverse question-answer pairs across weather-related tasks, from basic lookups to advanced forecasting, extreme event detection, and counterfactual reasoning. Experiments on this benchmark demonstrate the strong performance of Zephyrus agents over text-only baselines, which they outperform by up to 35 percentage points in correctness. On harder tasks, however, Zephyrus performs similarly to text-only baselines, highlighting the challenging nature of our benchmark and suggesting promising directions for future work.
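The multi-turn "analyze, execute, observe, refine" loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`run_agent`, `fake_llm`), the `FINAL:` answer marker, and the stub LLM are all hypothetical, and a real Zephyrus-style agent would execute code against ZephyrusWorld's weather-data tools rather than a bare `exec`.

```python
# Hypothetical sketch of a conversational code-execution agent loop.
# The agent asks an LLM for Python code, runs it, and feeds the printed
# output back as an observation until the LLM emits a final answer.
import io
import contextlib

def run_agent(llm, question, max_turns=5):
    """Alternate between requesting code from the LLM and observing its output."""
    transcript = [f"Question: {question}"]
    for _ in range(max_turns):
        reply = llm("\n".join(transcript))
        if reply.startswith("FINAL:"):            # agent decides it is done
            return reply[len("FINAL:"):].strip()
        # Otherwise treat the reply as Python code: execute and capture stdout.
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(reply, {})
        except Exception as e:                    # errors are observations too
            buf.write(f"Error: {e}")
        transcript.append(f"Code:\n{reply}\nObservation:\n{buf.getvalue()}")
    return "no answer within turn budget"

def fake_llm(prompt):
    """Stand-in LLM: first turn emits code, second turn reads the observation."""
    if "Observation:" not in prompt:
        # e.g. compute a mean 2m temperature (Kelvin) over three grid points
        return "print(round(sum([280.1, 281.3, 279.6]) / 3, 2))"
    obs = prompt.rsplit("Observation:\n", 1)[1].strip()
    return f"FINAL: mean temperature is {obs} K"

answer = run_agent(fake_llm, "What is the mean of these temperatures?")
```

The key design point, which the abstract attributes to Zephyrus, is that execution errors and printed results flow back into the conversation, letting the model correct itself across turns instead of answering from text alone.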