Quantifying uncertainty and stability among highly correlated predictors: a subspace perspective

📅 2025-05-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the ambiguous definition of false positives, the instability, and the poor interpretability of linear feature selection under high feature correlation, this paper proposes a subspace-level feature selection framework. It lifts the definitions of error rate and stability from individual features to feature subspaces and develops a corresponding theory of subspace-stable selection. The authors introduce a paradigm of interchangeable model identification and substitute-structure discovery, designing a stability-selection generalization based on subspace similarity together with a substitute-structure detection method, both implemented in the R package `substab`. Experiments on synthetic and real gene expression datasets show that the approach significantly improves cross-sampling stability and model interpretability, while explicitly identifying sets of equivalent models under multicollinearity.

📝 Abstract
We study the problem of linear feature selection when features are highly correlated. This setting presents two main challenges. First, how should false positives be defined? Intuitively, selecting a null feature that is highly correlated with a true one may be less problematic than selecting a completely uncorrelated null feature. Second, correlation among features can cause variable selection methods to produce very different feature sets across runs, making it hard to identify stable features. To address these issues, we propose a new framework based on feature subspaces -- the subspaces spanned by selected columns of the feature matrix. This framework leads to a new definition of false positives and negatives based on the "similarity" of feature subspaces. Further, instead of measuring stability of individual features, we measure stability with respect to feature subspaces. We propose and theoretically analyze a subspace generalization of stability selection (Meinshausen and Bühlmann, 2010). This procedure outputs multiple candidate stable models which can be considered interchangeable due to multicollinearity. We also propose a method for identifying substitute structures -- features that can be swapped and yield "equivalent" models. Finally, we demonstrate our framework and algorithms using both synthetic and real gene expression data. Our methods are implemented in the R package substab.
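The abstract's central object is the similarity between subspaces spanned by two selected column sets. The paper's methods live in the R package substab; as an illustrative stand-in only, the sketch below measures similarity via the cosines of the principal angles between the two spans (a common choice, not necessarily the paper's exact definition), so that two perfectly correlated columns that span the same line count as identical selections:

```python
import numpy as np

def subspace_similarity(X, sel_a, sel_b):
    """Similarity between the subspaces spanned by two selected column sets.

    Uses the mean cosine of the principal angles between the spans:
    1.0 means identical subspaces, 0.0 means orthogonal ones.
    (Illustrative choice of similarity; not the paper's exact definition.)
    """
    Qa, _ = np.linalg.qr(X[:, sel_a])  # orthonormal basis of span(X[:, sel_a])
    Qb, _ = np.linalg.qr(X[:, sel_b])
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return float(np.mean(np.clip(s, 0.0, 1.0)))

# Column 1 is an exact multiple of column 0; column 2 is orthogonal to both.
X = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
print(subspace_similarity(X, [0], [1]))  # 1.0: same span, "not a false positive"
print(subspace_similarity(X, [0], [2]))  # 0.0: orthogonal span
```

This is the intuition behind the paper's redefinition of false positives: swapping column 0 for column 1 leaves the selected subspace unchanged, whereas selecting column 2 genuinely changes it.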
Problem

Research questions and friction points this paper is trying to address.

Defining false positives in highly correlated feature selection
Measuring stability of feature subspaces, not individual features
Identifying interchangeable features due to multicollinearity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defines false positives and negatives via feature subspaces
Measures stability via feature-subspace similarity
Identifies substitute structures that yield equivalent models
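The subspace view of stability can be sketched end to end: subsample the rows, run a base selector on each subsample, and score stability by average pairwise subspace similarity rather than by how often the same individual features recur. The sketch below is a toy version under stated assumptions: the base selector is a simple correlation screen (the paper would use a real selector such as the lasso), and similarity is again the mean principal-angle cosine. The function names are illustrative, not the substab API:

```python
import numpy as np

def select_topk(X, y, k):
    """Toy base selector: top-k features by absolute correlation with y.
    (Stand-in for a real selector such as the lasso.)"""
    scores = np.abs(X.T @ y) / (np.linalg.norm(X, axis=0) * np.linalg.norm(y) + 1e-12)
    return np.argsort(scores)[-k:]

def subspace_stability(X, y, k=1, B=30, seed=0):
    """Average pairwise subspace similarity of the selections made on
    B random half-subsamples of the rows (1.0 = perfectly stable spans)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    selections = []
    for _ in range(B):
        idx = rng.choice(n, size=n // 2, replace=False)
        selections.append(select_topk(X[idx], y[idx], k))

    def sim(a, b):
        Qa, _ = np.linalg.qr(X[:, a])
        Qb, _ = np.linalg.qr(X[:, b])
        s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
        return float(np.mean(np.clip(s, 0.0, 1.0)))

    pairs = [(i, j) for i in range(B) for j in range(i + 1, B)]
    return float(np.mean([sim(selections[i], selections[j]) for i, j in pairs]))

# Two nearly duplicate signal features plus pure-noise features: per-feature
# selection flips between columns 0 and 1 across subsamples, yet the selected
# subspace barely changes, so subspace stability stays near 1.
rng = np.random.default_rng(42)
z = rng.normal(size=200)
X = np.column_stack([z, z + 0.01 * rng.normal(size=200),
                     rng.normal(size=200), rng.normal(size=200)])
y = z
print(subspace_stability(X, y, k=1))  # close to 1.0
```

This is the sense in which multicollinear selections are "interchangeable": runs that pick different feature indices can still span nearly the same subspace.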