🤖 AI Summary
Existing interpretable load forecasting methods for smart grids lack systematic modeling of multivariate data, limiting transparency and trustworthiness. Method: This paper proposes an energy-scenario-oriented classification framework for heterogeneous multi-source data, unifying metadata, domain-specific features, contextual features, and behavioral features into a single interpretable modeling paradigm. Leveraging public datasets—including UKDALE, REFIT, and ECO—we integrate interpretable machine learning techniques (Lasso, decision trees, XGBoost, and SHAP) to quantitatively analyze the impact of each feature category on prediction performance. Contribution/Results: Experimental results demonstrate that the proposed framework reduces average mean absolute error (MAE) by 12.7% compared to baseline models. Moreover, it enables feature-level attribution explanations, significantly enhancing model interpretability, stakeholder trust, and practical deployment capability of AI in power systems.
📝 Abstract
The transition from traditional power grids to smart grids, significant increase in the use of renewable energy sources, and soaring electricity prices has triggered a digital transformation of the energy infrastructure that enables new, data driven, applications often supported by machine learning models. However, the majority of the developed machine learning models rely on univariate data. To date, a structured study considering the role meta-data and additional measurements resulting in multivariate data is missing. In this paper we propose a taxonomy that identifies and structures various types of data related to energy applications. The taxonomy can be used to guide application specific data model development for training machine learning models. Focusing on a household electricity forecasting application, we validate the effectiveness of the proposed taxonomy in guiding the selection of the features for various types of models. As such, we study of the effect of domain, contextual and behavioral features on the forecasting accuracy of four interpretable machine learning techniques and three openly available datasets. Finally, using a feature importance techniques, we explain individual feature contributions to the forecasting accuracy.