🤖 AI Summary
The “black-box” nature of machine learning models hinders their trustworthy deployment in high-stakes domains such as healthcare and finance. Existing interpretability methods predominantly assess univariate feature importance, struggle to robustly detect non-additive feature interactions, and provide no statistical error control. To address this, we propose Diamond, a framework for discovering feature interactions in black-box models that controls the false discovery rate (FDR) via Model-X knockoffs. Our approach pairs a non-additivity distillation procedure, which refines off-the-shelf interaction importance measures, with the knockoff filter, and is compatible with diverse architectures, including deep neural networks, transformers, tree-based models, and factorization-based models. Evaluations on synthetic benchmarks and multiple biomedical datasets demonstrate that our method keeps the proportion of falsely discovered interactions low, improving the statistical reliability of scientific hypothesis generation.
📝 Abstract
Machine learning (ML) models are powerful tools for detecting complex patterns within data, yet their "black-box" nature limits their interpretability, hindering their use in critical domains like healthcare and finance. To address this challenge, interpretable ML methods have been developed to explain how features influence model predictions. However, these methods often focus on univariate feature importance, overlooking the complex interactions between features that ML models are capable of capturing. Recognizing this limitation, recent efforts have aimed to extend these methods to discover feature interactions, but existing approaches struggle with robustness and error control, especially under data perturbations. In this study, we introduce Diamond, a novel method for trustworthy feature interaction discovery. Diamond uniquely integrates the model-X knockoffs framework to control the false discovery rate (FDR), ensuring that the proportion of falsely discovered interactions remains low. A key innovation in Diamond is its non-additivity distillation procedure, which refines existing interaction importance measures to distill non-additive interaction effects, ensuring that FDR control is maintained. This approach addresses the limitations of off-the-shelf interaction measures, which, when used naively, can lead to inaccurate discoveries. Diamond's applicability spans a wide range of ML models, including deep neural networks, transformer models, tree-based models, and factorization-based models. Our empirical evaluations on both simulated and real datasets across various biomedical studies demonstrate Diamond's utility in enabling more reliable data-driven scientific discoveries. This method represents a significant step forward in the deployment of ML models for scientific innovation and hypothesis generation.
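To make the FDR-control step concrete: in the model-X knockoffs framework, each candidate (here, each feature interaction) receives a statistic W_j that is large and positive for genuine effects and symmetric around zero for nulls, and the knockoff+ threshold converts these statistics into a selection set with FDR guaranteed at a target level q. The sketch below is a minimal, generic illustration of that thresholding rule, not Diamond's implementation; the toy W values and the variable names are invented for the example.

```python
import numpy as np

def knockoff_threshold(W, q=0.1):
    """Knockoff+ threshold: the smallest t > 0 such that
    (1 + #{j : W_j <= -t}) / max(1, #{j : W_j >= t}) <= q."""
    candidates = np.sort(np.abs(W[W != 0]))
    for t in candidates:
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return t
    return np.inf  # no threshold achieves the target FDR; select nothing

# Toy knockoff statistics for 10 candidate interactions: large positive
# values mimic genuine effects, small sign-symmetric values mimic nulls.
W = np.array([5.0, 4.2, 3.8, 3.5, 3.1, -0.5, 0.3, -0.2, 0.1, -0.4])
t = knockoff_threshold(W, q=0.2)
selected = np.where(W >= t)[0]  # indices of discovered interactions
```

The "+1" in the numerator is what distinguishes knockoff+ from the plain knockoff filter and is what yields exact (rather than modified) FDR control; with the toy statistics above, the five strongly positive candidates clear the threshold while the near-zero ones do not.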