🤖 AI Summary
This study addresses a key limitation of current wildfire risk index models: their evaluation often neglects false positive rates and therefore fails to reflect real-world operational decision-making. To bridge this gap, the authors propose a map-level evaluation framework designed for authentic decision contexts, aligning model performance with operational objectives by jointly assessing fire detection capability and false alarm control. By combining machine learning models enriched with high-resolution predictive features with spatially explicit validation, the framework substantially improves fire identification accuracy while reducing false positive rates. This enhances both the practical utility and reliability of wildfire risk forecasting systems and offers a more operationally relevant benchmark for model evaluation in fire management.
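The summary describes a map-level evaluation that scores each daily risk map on both fire detection and false alarms. The sketch below is a minimal, hypothetical illustration of that idea (it is not the authors' implementation): it assumes a gridded daily risk map and an observed-fire mask, thresholds the map into an "alarm" area, and reports the probability of detection alongside the probability of false detection.

```python
# Hypothetical sketch of a map-level, decision-oriented evaluation:
# for one daily risk map, measure how many observed fires fall inside
# the flagged "high danger" area and how much fire-free area is flagged.
# All names (evaluate_daily_map, threshold value) are illustrative assumptions.
import numpy as np

def evaluate_daily_map(risk_map, fire_mask, threshold=0.5):
    """Return (detection rate, false-alarm rate) for one daily map.

    risk_map  : 2D array of predicted fire danger in [0, 1]
    fire_mask : 2D boolean array, True where a fire was observed that day
    """
    alarm = risk_map >= threshold                          # cells flagged as dangerous
    hits = np.logical_and(alarm, fire_mask).sum()          # fires inside the alarm area
    misses = np.logical_and(~alarm, fire_mask).sum()       # fires outside the alarm area
    false_alarms = np.logical_and(alarm, ~fire_mask).sum() # flagged but fire-free cells
    correct_neg = np.logical_and(~alarm, ~fire_mask).sum() # unflagged fire-free cells

    detection_rate = hits / max(hits + misses, 1)                     # probability of detection
    false_alarm_rate = false_alarms / max(false_alarms + correct_neg, 1)  # probability of false detection
    return detection_rate, false_alarm_rate

# Example with synthetic data for a single day
rng = np.random.default_rng(0)
risk = rng.random((100, 100))
fires = rng.random((100, 100)) > 0.995
print(evaluate_daily_map(risk, fires))
```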
📝 Abstract
A growing body of literature has focused on predicting wildfire occurrence with machine learning methods, capitalizing on high-resolution data and fire predictors that canonical process-based frameworks largely ignore. Standard evaluation metrics for ML classifiers, while important, provide only a limited measure of a model's operational performance for Fire Danger Index (FDI) forecasting. Moreover, model evaluation is frequently conducted without adequately accounting for false positive rates, despite their critical relevance in operational contexts. In this paper, we revisit the daily FDI model evaluation paradigm and propose a novel method for evaluating forest fire forecasting models that is aligned with real-world decision-making. We systematically assess performance in terms of both correctly predicted fire activity and false positives (false alarms), and we further demonstrate that an ensemble of ML models both improves fire identification and reduces false positives.
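The abstract's final claim concerns an ensemble of ML models. As a hedged sketch only (the specific learners, features, and combination rule used in the paper are not stated here), the snippet below shows one common way to build such an ensemble: fit several classifiers on the same fire-occurrence data and average their predicted fire probabilities.

```python
# Minimal probability-averaging ensemble for daily fire occurrence.
# The model choices and synthetic features are placeholder assumptions,
# not the paper's configuration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

def fit_ensemble(X_train, y_train):
    """Fit several classifiers on the same training data and return them."""
    models = [
        RandomForestClassifier(n_estimators=200, random_state=0),
        GradientBoostingClassifier(random_state=0),
        LogisticRegression(max_iter=1000),
    ]
    for model in models:
        model.fit(X_train, y_train)
    return models

def predict_ensemble(models, X):
    """Average the predicted fire-occurrence probabilities across models."""
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return probs.mean(axis=0)

# Usage with synthetic predictors (stand-ins for weather and fuel features)
rng = np.random.default_rng(1)
X = rng.random((500, 10))
y = (X[:, 0] + 0.3 * rng.standard_normal(500) > 0.7).astype(int)
ensemble = fit_ensemble(X, y)
p_fire = predict_ensemble(ensemble, X)
print(p_fire[:5])
```

The averaged probability can then be thresholded into a danger map and scored with the map-level evaluation sketched above, so that detection and false-alarm rates are tracked jointly.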