FloodVision: Urban Flood Depth Estimation Using Foundation Vision-Language Models and Domain Knowledge Graph

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing urban flood depth estimation methods rely on task-specific detectors and custom training, resulting in poor generalizability and limited accuracy. This paper proposes the first zero-shot flood depth estimation framework, which innovatively integrates a vision-language model (GPT-4o) with a structured hydraulic knowledge graph. Leveraging real-world object dimension priors, it enables knowledge-guided cross-scene reasoning without fine-tuning, effectively suppressing hallucination. The method comprises dynamic reference-object detection, knowledge-graph querying, submergence-ratio modeling, and statistical anomaly filtering. Evaluated on 110 real-world flood images, it achieves a mean absolute error of 8.17 cm, outperforming the GPT-4o baseline by 20.5% and significantly surpassing prior CNN-based approaches. The framework delivers near-real-time inference while maintaining strong generalizability across diverse urban flooding scenarios.
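The submergence-ratio model named above reduces to simple geometry: flood depth is a reference object's known height times the fraction of it under water. A minimal sketch, assuming illustrative dimension priors and function names (the paper's actual code and values are not given):

```python
# Hypothetical dimension priors in the style of the paper's knowledge graph.
# All heights are illustrative assumptions, not values from the paper.
CANONICAL_HEIGHT_CM = {
    "sedan_wheel": 63.0,
    "fire_hydrant": 75.0,
    "adult_person": 170.0,
}

def depth_from_submergence(obj: str, submerged_ratio: float) -> float:
    """Flood depth = canonical object height x fraction of the object submerged."""
    if not 0.0 <= submerged_ratio <= 1.0:
        raise ValueError("submerged_ratio must be in [0, 1]")
    return CANONICAL_HEIGHT_CM[obj] * submerged_ratio

print(depth_from_submergence("fire_hydrant", 0.4))  # 30.0
```

In the full pipeline, the vision-language model would supply `obj` and `submerged_ratio` per detected reference object, and each object yields one candidate depth.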

📝 Abstract
Timely and accurate floodwater depth estimation is critical for road accessibility and emergency response. While recent computer vision methods have enabled flood detection, they suffer from both accuracy limitations and poor generalization due to dependence on fixed object detectors and task-specific training. To enable accurate depth estimation that can generalize across diverse flood scenarios, this paper presents FloodVision, a zero-shot framework that combines the semantic reasoning abilities of the foundation vision-language model GPT-4o with a structured domain knowledge graph. The knowledge graph encodes canonical real-world dimensions for common urban objects including vehicles, people, and infrastructure elements to ground the model's reasoning in physical reality. FloodVision dynamically identifies visible reference objects in RGB images, retrieves verified heights from the knowledge graph to mitigate hallucination, estimates submergence ratios, and applies statistical outlier filtering to compute final depth values. Evaluated on 110 crowdsourced images from MyCoast New York, FloodVision achieves a mean absolute error of 8.17 cm, reducing the GPT-4o baseline's 10.28 cm error by 20.5% and surpassing prior CNN-based methods. The system generalizes well across varying scenes and operates in near real-time, making it suitable for future integration into digital twin platforms and citizen-reporting apps for smart city flood resilience.
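The abstract's hallucination-mitigation step, retrieving verified heights from the knowledge graph, can be read as grounding the model's free-form size guesses against trusted priors. A minimal sketch under that reading; the graph schema, tolerance, and function names are assumptions for illustration, not the paper's implementation:

```python
# Assumed knowledge-graph fragment: object -> verified physical dimensions.
KNOWLEDGE_GRAPH = {
    "fire_hydrant": {"height_cm": 75.0},
    "stop_sign_pole": {"height_cm": 210.0},
}

def grounded_height(obj: str, vlm_height_cm: float, tolerance: float = 0.25) -> float:
    """Override the VLM's estimated height with the verified prior whenever the
    two disagree by more than `tolerance` (relative), suppressing hallucinated sizes."""
    prior = KNOWLEDGE_GRAPH[obj]["height_cm"]
    if abs(vlm_height_cm - prior) / prior > tolerance:
        return prior  # fall back to the knowledge graph
    return vlm_height_cm  # model's estimate is plausible; keep it
```

A query such as `grounded_height("fire_hydrant", 140.0)` would return the prior 75.0, while `grounded_height("fire_hydrant", 80.0)` keeps the model's plausible 80.0.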
Problem

Research questions and friction points this paper is trying to address.

Estimating urban flood depth accurately across diverse scenarios
Overcoming limitations of fixed object detectors and poor generalization
Combining vision-language models with domain knowledge for zero-shot estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines vision-language model with domain knowledge graph
Uses verified object heights to mitigate AI hallucination
Applies statistical filtering for accurate depth estimation
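The statistical-filtering bullet above can be sketched as rejecting anomalous per-object depth estimates before aggregating. The 1.5×IQR fence and the mean aggregation are assumptions; the paper does not specify its exact filter:

```python
import statistics

def filter_and_aggregate(depths_cm: list[float]) -> float:
    """Drop depth candidates outside Tukey's 1.5*IQR fences, then average the rest."""
    q1, _, q3 = statistics.quantiles(depths_cm, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    kept = [d for d in depths_cm if lo <= d <= hi]
    return statistics.mean(kept)

# Five reference objects agree near 30 cm; one hallucinated 95 cm reading is dropped.
print(filter_and_aggregate([28.0, 30.0, 31.0, 29.0, 95.0]))  # 29.5
```

Averaging over several reference objects per image is what lets a single bad detection be outvoted rather than corrupt the final depth.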