AquaCast: Urban Water Dynamics Forecasting with Precipitation-Informed Multi-Input Transformer

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Urban hydrodynamic forecasting faces challenges in effectively fusing endogenous variables (e.g., water level, flow rate) with exogenous factors (e.g., historical and forecasted precipitation) and suffers from low modeling efficiency. To address this, we propose a Transformer-based multi-input multi-output deep learning model. Our key innovation lies in introducing an embedding layer that directly encodes external variables—bypassing separate prediction of exogenous inputs—and thereby captures cross-dimensional dependencies among heterogeneous time series. Leveraging self-attention mechanisms, the model jointly processes multi-source temporal data. Evaluated on the real-world LausanneCity dataset and three large-scale synthetic benchmarks, it consistently outperforms state-of-the-art methods in short-term urban flood risk prediction. The model demonstrates strong robustness to input perturbations and scalability to varying data scales and variable dimensions, enabling high-accuracy, operationally viable forecasting for smart city water management.

📝 Abstract
This work addresses the challenge of forecasting urban water dynamics by developing a multi-input, multi-output deep learning model that incorporates both endogenous variables (e.g., water height or discharge) and exogenous factors (e.g., precipitation history and forecast reports). Unlike conventional forecasting, the proposed model, AquaCast, captures both inter-variable and temporal dependencies across all inputs, while focusing the forecast solely on endogenous variables. Exogenous inputs are fused via an embedding layer, eliminating the need to forecast them and enabling the model to attend to their short-term influences more effectively. We evaluate our approach on the LausanneCity dataset, which includes measurements from four urban drainage sensors, and demonstrate state-of-the-art performance when using only endogenous variables. Performance improves further with the inclusion of exogenous variables and forecast reports. To assess generalization and scalability, we additionally test the model on three large-scale synthesized datasets, generated from MeteoSwiss records, the Lorenz Attractors model, and the Random Fields model, each representing a different level of temporal complexity across 100 nodes. The results confirm that our model consistently outperforms existing baselines and maintains a robust and accurate forecast across both real and synthetic datasets.
Problem

Research questions and friction points this paper is trying to address.

Forecasting urban water dynamics using multi-input deep learning
Incorporating precipitation history and forecast reports
Capturing inter-variable and temporal dependencies effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-input transformer model with precipitation embedding
Captures inter-variable and temporal dependencies simultaneously
Eliminates exogenous forecasting via embedding layer fusion
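The fusion idea above can be sketched in a minimal, self-contained way: endogenous and exogenous channels each get their own linear embedding, the embeddings are summed into one token sequence, self-attention mixes them jointly, and the readout predicts only the endogenous variables. All dimensions, weight shapes, and the single-head attention here are illustrative assumptions, not the paper's published architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product attention over the time axis.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

# Toy dimensions (hypothetical, chosen for illustration only)
T, d_model = 24, 16          # lookback window, embedding width
n_endo, n_exo = 4, 2         # e.g. 4 drainage sensors; observed + forecast rain
horizon = 6                  # forecast steps

endo = rng.normal(size=(T, n_endo))   # endogenous: water level / flow rate
exo  = rng.normal(size=(T, n_exo))    # exogenous: precipitation signals

# Separate embeddings, fused by summation: exogenous inputs condition the
# representation without ever being forecast themselves.
W_endo = rng.normal(size=(n_endo, d_model)) * 0.1
W_exo  = rng.normal(size=(n_exo, d_model)) * 0.1
X = endo @ W_endo + exo @ W_exo       # (T, d_model) fused token sequence

Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(3))
H = self_attention(X, Wq, Wk, Wv)     # joint temporal / inter-variable mixing

# Readout from the last hidden state to a multi-step forecast of the
# endogenous variables only.
W_out = rng.normal(size=(d_model, horizon * n_endo)) * 0.1
forecast = (H[-1] @ W_out).reshape(horizon, n_endo)
print(forecast.shape)  # (6, 4): horizon steps x endogenous sensors
```

The key design point this sketch mirrors is that precipitation never appears in the output head, so no separate exogenous forecasting model is needed; its influence enters only through the shared embedding that the attention layers read.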