Data Model Design for Explainable Machine Learning-based Electricity Applications

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing interpretable load forecasting methods for smart grids lack systematic modeling of multivariate data, limiting transparency and trustworthiness. Method: This paper proposes an energy-scenario-oriented classification framework for heterogeneous multi-source data, unifying metadata, domain-specific features, contextual features, and behavioral features into a single interpretable modeling paradigm. Leveraging public datasets—including UKDALE, REFIT, and ECO—we integrate interpretable machine learning techniques (Lasso, decision trees, XGBoost, and SHAP) to quantitatively analyze the impact of each feature category on prediction performance. Contribution/Results: Experimental results demonstrate that the proposed framework reduces average mean absolute error (MAE) by 12.7% compared to baseline models. Moreover, it enables feature-level attribution explanations, significantly enhancing model interpretability, stakeholder trust, and practical deployment capability of AI in power systems.

Technology Category

Application Category

📝 Abstract
The transition from traditional power grids to smart grids, significant increase in the use of renewable energy sources, and soaring electricity prices has triggered a digital transformation of the energy infrastructure that enables new, data driven, applications often supported by machine learning models. However, the majority of the developed machine learning models rely on univariate data. To date, a structured study considering the role meta-data and additional measurements resulting in multivariate data is missing. In this paper we propose a taxonomy that identifies and structures various types of data related to energy applications. The taxonomy can be used to guide application specific data model development for training machine learning models. Focusing on a household electricity forecasting application, we validate the effectiveness of the proposed taxonomy in guiding the selection of the features for various types of models. As such, we study of the effect of domain, contextual and behavioral features on the forecasting accuracy of four interpretable machine learning techniques and three openly available datasets. Finally, using a feature importance techniques, we explain individual feature contributions to the forecasting accuracy.
Problem

Research questions and friction points this paper is trying to address.

Designing data models for explainable ML in electricity applications
Addressing lack of structured study on multivariate energy data
Evaluating feature impact on household electricity forecasting accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes taxonomy for energy data structuring
Validates taxonomy with household electricity forecasting
Explains feature contributions using importance techniques
🔎 Similar Papers
No similar papers found.
Carolina Fortuna
Carolina Fortuna
Jozef Stefan Institute
artificial intelligencecyber-physical systems
Gregor Cerar
Gregor Cerar
Jozef Stefan Institute, SensorLab
wirelessAIMLMLOps
B
Blaž Bertalanič
Joˇzef Stefan Institute, Slovenia
A
Andrej Čampa
Joˇzef Stefan Institute, Slovenia
M
M. Mohorčič
Joˇzef Stefan Institute, Slovenia