XStacking: Explanation-Guided Stacked Ensemble Learning

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Stacking ensembles are inherently opaque, which limits their interpretability. To address this, we propose XStacking, a stacking framework that integrates model-agnostic Shapley value attribution with dynamic feature transformation. Its core idea is to use Shapley-based feature attribution to drive an adaptive reconstruction of the learning space, so that the meta-learner achieves both high predictive performance and intrinsic interpretability without resorting to post-hoc explanation. XStacking is compatible with arbitrary base learners and requires no modification to the underlying algorithms. Experiments on 29 benchmark datasets show that XStacking consistently outperforms conventional stacking and state-of-the-art interpretable ensemble methods, improving both prediction accuracy and explanation fidelity. The framework is theoretically grounded and practically deployable.
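The summary above describes feeding Shapley attributions into the meta-learner, but the paper's exact transformation is not reproduced here. The sketch below is one hypothetical reading, under stated assumptions: each base model contributes its predicted probability plus its per-sample exact Shapley attributions (computed with a training-mean baseline, feasible because the toy data has only 3 features), and these form the meta-learner's input. All model choices and helper names are illustrative, not the authors' implementation.

```python
import numpy as np
from itertools import combinations
from math import factorial
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy data: 3 features, so exact Shapley (2^3 coalitions) stays cheap.
X, y = make_classification(n_samples=300, n_features=3, n_informative=2,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

def exact_shapley(predict, x, baseline):
    """Exact interventional Shapley values for one sample: features absent
    from a coalition are replaced by the training-set mean (`baseline`)."""
    d = x.size
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        rows_with, rows_without, weights = [], [], []
        for k in range(d):
            for S in combinations(others, k):
                S = list(S)
                w = factorial(len(S)) * factorial(d - 1 - len(S)) / factorial(d)
                z = baseline.copy()
                z[S] = x[S]                  # coalition features take real values
                z_i = z.copy()
                z_i[i] = x[i]                # same coalition, plus feature i
                rows_with.append(z_i)
                rows_without.append(z)
                weights.append(w)
        preds = predict(np.vstack(rows_with + rows_without))
        m = len(weights)
        phi[i] = np.dot(weights, preds[:m] - preds[m:])
    return phi

base_models = [
    DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr),
    LogisticRegression().fit(X_tr, y_tr),
]
baseline = X_tr.mean(axis=0)

def meta_features(X_part):
    """Meta-learner input: each base model's probability plus its per-sample
    Shapley attributions over the original features (a hypothetical transform)."""
    cols = []
    for model in base_models:
        predict = lambda Z: model.predict_proba(Z)[:, 1]
        prob = predict(X_part)
        phi = np.vstack([exact_shapley(predict, x, baseline) for x in X_part])
        cols.append(prob[:, None])
        cols.append(phi)
    return np.hstack(cols)

# NOTE: real stacking would use out-of-fold base predictions here to avoid
# leakage; we reuse the training set directly to keep the sketch short.
Z_tr, Z_te = meta_features(X_tr), meta_features(X_te)
meta = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
acc = meta.score(Z_te, y_te)
print(f"meta-learner accuracy on Shapley-transformed features: {acc:.3f}")
```

Because the meta-learner consumes attribution columns directly, its coefficients relate the final decision to per-feature contributions of each base model, which is one plausible sense in which such a stack is "inherently" explainable.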

📝 Abstract
Ensemble Machine Learning (EML) techniques, especially stacking, have been shown to improve predictive performance by combining multiple base models. However, they are often criticized for their lack of interpretability. In this paper, we introduce XStacking, an effective and inherently explainable framework that addresses this limitation by integrating dynamic feature transformation with model-agnostic Shapley additive explanations. This enables stacked models to retain their predictive accuracy while becoming inherently explainable. We demonstrate the effectiveness of the framework on 29 datasets, achieving improvements in both the predictive effectiveness of the learning space and the interpretability of the resulting models. XStacking offers a practical and scalable solution for responsible ML.
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability of stacked ensemble learning
Combining predictive accuracy with explainability in ML
Integrating dynamic feature transformation and Shapley explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic feature transformation for interpretability
Model-agnostic Shapley additive explanations integration
Ensemble learning with retained predictive accuracy