🤖 AI Summary
Traditional feature importance methods yield only scalar scores, failing to characterize the mutual information between a feature and the target, the contribution of feature combinations to synergistic information, or the degree of redundancy among features. To address this limitation, we propose PIDF, a framework that systematically introduces Partial Information Decomposition (PID) into feature analysis. PIDF characterizes each feature through three quantities: the mutual information it shares with the target, the synergistic information it contributes in combination with other features, and the portion of its information that is redundant, thereby unifying interpretability and feature selection. The method integrates robust mutual information estimation with principled quantification of synergy and redundancy. Extensive evaluation on synthetic benchmarks and real-world applications in genetics and neuroscience demonstrates that PIDF accurately identifies critical biomarkers and uncovers complex feature interaction patterns, enhancing both explanatory depth and selection robustness.
📝 Abstract
In this paper, we introduce Partial Information Decomposition of Features (PIDF), a new paradigm for simultaneous data interpretability and feature selection. In contrast to traditional methods that assign a single importance value, our approach is based on three metrics per feature: the mutual information shared with the target variable, the feature's contribution to synergistic information, and the amount of this information that is redundant. In particular, we develop a novel procedure based on these three metrics, which reveals not only how features are correlated with the target but also the additional and overlapping information provided by considering them in combination with other features. We extensively evaluate PIDF on both synthetic and real-world data, demonstrating its effectiveness and potential applications through case studies from genetics and neuroscience.
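To make the distinction between individual and combined information concrete, the sketch below estimates mutual information for a toy XOR target, where each feature alone is uninformative but the pair is fully informative. This is an illustrative plug-in estimate of interaction information on discrete data, not the PIDF procedure itself; the function names and the XOR setup are assumptions for illustration only.

```python
import numpy as np
from collections import Counter

def mi(xs, ys):
    """Plug-in estimate of mutual information I(X;Y) in bits for discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint empirical counts
    px = Counter(xs)            # marginal counts for X
    py = Counter(ys)            # marginal counts for Y
    return sum((c / n) * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# XOR target: neither feature carries information on its own,
# yet together they determine y exactly.
rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 10_000)
x2 = rng.integers(0, 2, 10_000)
y = x1 ^ x2

mi_1 = mi(list(x1), list(y))                 # close to 0 bits
mi_2 = mi(list(x2), list(y))                 # close to 0 bits
mi_pair = mi(list(zip(x1, x2)), list(y))     # close to 1 bit
synergy = mi_pair - mi_1 - mi_2              # information only the pair provides
```

A scalar importance score would rank `x1` and `x2` as useless here; decomposing the pairwise information exposes the purely synergistic bit that PIDF is designed to surface (alongside redundancy, which this toy example does not exhibit).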