🤖 AI Summary
In high-dimensional biological data, sparse and nonlinear feature interactions—such as motif co-occurrences—are often obscured by noise, impeding reliable detection of key biological signals; moreover, existing methods struggle to balance interpretability with expressive modeling capacity. This paper proposes an interpretable probabilistic interaction modeling framework based on Bayesian binary regression. It introduces a grouped global–local shrinkage prior to jointly suppress noise and preserve sparse signals, and a partially factorized variational inference algorithm that retains the ability to model posterior skewness while improving computational efficiency. Experiments demonstrate that the proposed method achieves significantly higher interaction detection accuracy than state-of-the-art approaches on synthetic benchmarks, attains inference speedups of over an order of magnitude compared to MCMC, and successfully uncovers biologically meaningful interactions from deep learning attribution scores—thereby bridging interpretability and high-fidelity interaction modeling.
📝 Abstract
Biological data sets are often high-dimensional, noisy, and governed by complex interactions among sparse signals. This poses major challenges for interpretability and reliable feature selection. Tasks such as identifying motif interactions in genomics exemplify these difficulties, as only a small subset of biologically relevant features (e.g., motifs) is typically active, and their effects are often non-linear and context-dependent. While statistical approaches often yield more interpretable models, deep learning models have proven effective at modeling complex interactions and achieving high prediction accuracy, yet their black-box nature limits interpretability. We introduce BaGGLS, a flexible and interpretable probabilistic binary regression model designed for high-dimensional biological inference involving feature interactions. BaGGLS incorporates a Bayesian group global-local shrinkage prior, aligned with the group structure introduced by interaction terms. This prior encourages sparsity while retaining interpretability, helping to isolate meaningful signals and suppress noise. To enable scalable inference, we employ a partially factorized variational approximation that captures posterior skewness and supports efficient learning even in large feature spaces. In extensive simulations, we show that BaGGLS outperforms competing methods in interaction detection and is many times faster than MCMC sampling under the horseshoe prior. We also demonstrate the usefulness of BaGGLS for interaction discovery from motif scanner outputs and from noisy attribution scores produced by deep learning models. This shows that BaGGLS is a promising approach for uncovering biologically relevant interaction patterns, with potential applicability across a range of high-dimensional tasks in computational biology.
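To make the grouped global–local shrinkage idea concrete, here is a minimal generative sketch of such a prior over interaction coefficients. This is an illustration only, not the paper's actual implementation: the function name, the half-Cauchy choice for all scales (horseshoe-style), and the grouping of each interaction term into its own group are assumptions, since the abstract does not specify the exact prior family.

```python
import numpy as np

def sample_group_global_local_prior(groups, n_draws=1, seed=0):
    """Draw coefficients from an assumed grouped global-local shrinkage
    prior (horseshoe-style, all scales half-Cauchy):

        beta_j | tau, phi, lam ~ N(0, tau^2 * phi_{g(j)}^2 * lam_j^2)

    groups: group id for each coefficient, e.g. one group per
            interaction term (and its associated main effects).
    """
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)
    p = groups.size
    n_groups = groups.max() + 1
    # Global scale shared by all coefficients: tau ~ C+(0, 1)
    tau = np.abs(rng.standard_cauchy(size=n_draws))
    # One scale per group, shrinking whole interaction groups jointly
    phi = np.abs(rng.standard_cauchy(size=(n_draws, n_groups)))
    # One local scale per coefficient, letting strong signals escape shrinkage
    lam = np.abs(rng.standard_cauchy(size=(n_draws, p)))
    scale = tau[:, None] * phi[:, groups] * lam
    return rng.normal(0.0, scale)

# Toy example: two main effects and one interaction, each in its own group
betas = sample_group_global_local_prior([0, 1, 2], n_draws=5)
print(betas.shape)  # (5, 3)
```

The heavy-tailed local scales allow a few large coefficients to survive, while the shared group and global scales pull entire noise-dominated interaction groups toward zero; this mirrors the sparsity-with-interpretability behavior the abstract describes.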