Classification of high-dimensional data with spiked covariance matrix structure

📅 2021-10-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
We address binary classification under high-dimensional sparse mean differences and spiked covariance structures—where the covariance matrix exhibits a few large eigenvalues and many small, densely packed ones. We propose an adaptive framework comprising whitening via the sample covariance’s inverse square root, followed by sparse feature screening based on the top-$s$ coordinates of the whitened mean-difference vector, and finally Fisher’s linear discriminant in the reduced space. Our theoretical contribution is the first incorporation of entrywise matrix perturbation bounds for spiked covariance models into classification analysis, enabling attainment of the Bayes-optimal rate asymptotically (as $n \to \infty$ with $s\sqrt{(\log p)/n} \to 0$) without requiring prior knowledge of the sparsity level $s$. Empirically, our method achieves accuracy competitive with state-of-the-art approaches while substantially improving feature selection compactness.
📝 Abstract
We study the classification problem for high-dimensional data with $n$ observations on $p$ features where the $p \times p$ covariance matrix $\Sigma$ exhibits a spiked eigenvalue structure and the vector $\zeta$, given by the difference between the whitened mean vectors, is sparse with sparsity at most $s$. We propose an adaptive classifier (adaptive with respect to the sparsity $s$) that first performs dimension reduction on the feature vectors prior to classification in the dimensionally reduced space; i.e., the classifier whitens the data, then screens the features by keeping only those corresponding to the $s$ largest coordinates of $\zeta$, and finally applies Fisher's linear discriminant on the selected features. Leveraging recent results on entrywise matrix perturbation bounds for covariance matrices, we show that the resulting classifier is Bayes optimal whenever $n \rightarrow \infty$ and $s \sqrt{n^{-1} \ln p} \rightarrow 0$. Experimental results on real and synthetic data sets indicate that the proposed classifier is competitive with existing state-of-the-art methods while also selecting a smaller number of features.
Problem

Research questions and friction points this paper is trying to address.

Classify high-dimensional data with spiked covariance structure
Develop adaptive classifier for sparse whitened mean differences
Ensure Bayes optimality under specific asymptotic conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive classifier for high-dimensional data
Whitening and sparse feature screening
Fisher discriminant on selected features
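The three-stage pipeline above (whiten, screen, Fisher discriminant) can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: the sparsity level $s$ is passed in directly rather than chosen adaptively, the inverse square root is computed by a straightforward (regularized) eigendecomposition, and all function and variable names are ours.

```python
import numpy as np

def whiten_screen_lda(X0, X1, s, eps=1e-8):
    """Sketch of the whiten -> screen -> Fisher LDA pipeline.

    X0, X1 : (n0, p) and (n1, p) arrays of samples from the two classes.
    s      : number of whitened coordinates to keep (assumed known here).
    Returns a classify(x) function mapping a length-p vector to 0 or 1.
    """
    n0, n1 = len(X0), len(X1)
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)

    # Pooled sample covariance of the two classes.
    Sigma = ((n0 - 1) * np.cov(X0, rowvar=False)
             + (n1 - 1) * np.cov(X1, rowvar=False)) / (n0 + n1 - 2)

    # Whitening matrix: inverse square root of Sigma via eigendecomposition,
    # with small eigenvalues floored at eps for numerical stability.
    vals, vecs = np.linalg.eigh(Sigma)
    W = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, eps))) @ vecs.T

    # Whitened mean-difference vector; screen to its s largest coordinates.
    zeta = W @ (mu1 - mu0)
    keep = np.argsort(np.abs(zeta))[-s:]

    m0, m1 = (W @ mu0)[keep], (W @ mu1)[keep]
    diff = m1 - m0

    def classify(x):
        # In the whitened space the covariance is approximately the identity,
        # so Fisher's discriminant direction is just the screened mean difference.
        z = (W @ np.asarray(x))[keep]
        score = z @ diff - 0.5 * (m0 + m1) @ diff
        return int(score > 0)

    return classify
```

On data where the mean difference is concentrated in a few coordinates, the screening step discards the remaining features, which is the source of the compact feature selection the abstract reports.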
Yin-Jen Chen
Department of Statistics, North Carolina State University
Minh Tang
North Carolina State University
Topics: graph inference, dimension reduction