Sparse PCA With Multiple Components

📅 2022-09-29
🏛️ arXiv.org
📈 Citations: 2 · Influential: 0
🤖 AI Summary
This work addresses a long-standing challenge in high-dimensional sparse principal component analysis (sPCA): simultaneously ensuring orthogonality and near-optimality across multiple components. Conventional sequential extraction-and-deflation strategies inherently violate orthogonality; instead, the authors propose a unified optimization framework for jointly learning multiple sparse principal components. The method enforces orthogonality exactly via a rank constraint on the component matrix and optimizes over the sparsity and rank constraints simultaneously. Key technical contributions include: (i) a rank-constrained reformulation of the orthogonality conditions in sPCA; (ii) tight semidefinite relaxations strengthened with second-order cone inequalities when each PC's individual sparsity is specified; (iii) a combinatorial upper bound on the maximum variance explained as a function of the support; and (iv) exact methods and rounding mechanisms that attain certifiable bound gaps of 0%-15%. Experiments on real-world datasets with $p$ in the 100s to 1000s of features and $r \in \{2, 3\}$ components yield sparse, strictly orthogonal loadings while matching or exceeding the best-performing methods in fraction of variance explained.
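As a rough illustration of the relaxation step the summary describes, here is a minimal sketch in Python using cvxpy on a synthetic covariance matrix. The $\ell_1$ budget standing in for the sparsity constraint (and the constant $k \cdot r$) is an assumption for illustration, not the paper's actual constraint set, and the second-order cone strengthening is omitted.

```python
import cvxpy as cp
import numpy as np

# Minimal sketch of a semidefinite relaxation for multi-component
# sparse PCA. P relaxes the projection matrix U U^T onto the top-r
# sparse PCs: the condition "P is a rank-r projection" is relaxed to
# 0 <= P <= I with tr(P) = r, and the l0 sparsity to an l1 budget.
rng = np.random.default_rng(0)
p, r, k = 20, 2, 8                      # features, components, sparsity

A = rng.standard_normal((p, p))
Sigma = A @ A.T / p                     # synthetic covariance matrix

P = cp.Variable((p, p), symmetric=True)
constraints = [
    P >> 0,                             # P is positive semidefinite
    np.eye(p) - P >> 0,                 # eigenvalues of P at most 1
    cp.trace(P) == r,                   # relaxed rank-r projection
    cp.sum(cp.abs(P)) <= k * r,         # l1 surrogate for sparsity
]
problem = cp.Problem(cp.Maximize(cp.trace(Sigma @ P)), constraints)
problem.solve(solver=cp.SCS)
print(f"Relaxation upper bound on explained variance: {problem.value:.3f}")
```

Because the relaxation drops the rank constraint, its optimal value is an upper bound on the variance any feasible sparse orthogonal solution can explain, which is what makes the 0%-15% certified gaps possible.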
📝 Abstract
Sparse Principal Component Analysis (sPCA) is a cardinal technique for obtaining combinations of features, or principal components (PCs), that explain the variance of high-dimensional datasets in an interpretable manner. This involves solving a sparsity- and orthogonality-constrained convex maximization problem, which is extremely computationally challenging. Most existing works address sparse PCA via methods, such as iteratively computing one sparse PC and deflating the covariance matrix, that do not guarantee the orthogonality, let alone the optimality, of the resulting solution when we seek multiple mutually orthogonal PCs. We challenge this status quo by reformulating the orthogonality conditions as rank constraints and optimizing over the sparsity and rank constraints simultaneously. We design tight semidefinite relaxations to supply high-quality upper bounds, which we strengthen via additional second-order cone inequalities when each PC's individual sparsity is specified. Further, we derive a combinatorial upper bound on the maximum amount of variance explained as a function of the support. We exploit these relaxations and bounds to propose exact methods and rounding mechanisms that, together, obtain solutions with a bound gap on the order of 0%-15% for real-world datasets with $p$ in the 100s or 1000s of features and $r \in \{2, 3\}$ components. Numerically, our algorithms match (and sometimes surpass) the best-performing methods in terms of fraction of variance explained and systematically return PCs that are sparse and orthogonal. In contrast, we find that existing methods like deflation return solutions that violate the orthogonality constraints, even when the data is generated according to sparse orthogonal PCs. Altogether, our approach solves sparse PCA problems with multiple components to certifiable (near-)optimality in a practically tractable fashion.
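For orientation, the reformulation the abstract describes can be written out as follows. This sketch uses my own notation and simplifies the sparsity modeling; the paper's exact constraint set may differ.

```latex
% Sparse PCA with r mutually orthogonal PCs (the columns of U):
%   max_U  tr(\Sigma U U^\top)  s.t.  U^\top U = I_r,  U sparse.
% Substituting P = U U^\top, the orthogonality condition U^\top U = I_r
% becomes the requirement that P be a rank-r orthogonal projection:
\begin{align*}
  \max_{P \in \mathbb{S}^p} \quad & \operatorname{tr}(\Sigma P) \\
  \text{s.t.} \quad & P^2 = P, \quad \operatorname{tr}(P) = r
    \quad \text{(i.e., $P$ is a rank-$r$ projection)}, \\
  & \text{sparsity constraints on the support of } P.
\end{align*}
```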
Problem

Research questions and friction points this paper is trying to address.

Sparse PCA lacks guaranteed orthogonality for multiple components
Existing methods fail to ensure optimality and sparsity simultaneously
New method achieves near-optimal sparse orthogonal PCs efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reformulates orthogonality as rank constraints
Uses semidefinite relaxations for tight bounds
Proposes exact methods with rounding mechanisms (see the code sketch below)
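To make the rounding idea concrete, here is a hypothetical sketch of one such mechanism, assuming a relaxed solution from a semidefinite relaxation like the one sketched earlier: guess a sparse support from the relaxed solution's diagonal, then recompute exact orthogonal PCs on that support. The function name, the greedy support rule, and the single shared sparsity budget k are my own simplifications, not the authors' procedure.

```python
import numpy as np

def round_to_sparse_pcs(P_relaxed, Sigma, r, k):
    """Round a relaxed solution P (approximating U U^T) to feasible
    sparse, orthogonal loadings: keep the k coordinates carrying the
    most diagonal mass in P, then take the top-r eigenvectors of
    Sigma restricted to that support. The embedded eigenvectors are
    orthonormal by construction and share a k-sparse support."""
    support = np.argsort(np.diag(P_relaxed))[-k:]
    Sigma_S = Sigma[np.ix_(support, support)]
    _, eigvecs = np.linalg.eigh(Sigma_S)      # ascending eigenvalues
    U = np.zeros((Sigma.shape[0], r))
    U[support, :] = eigvecs[:, -r:]           # top-r eigenvectors
    return U

# Explained variance of the rounded solution: np.trace(U.T @ Sigma @ U)
```

Because the loadings come from an eigendecomposition on the restricted support, orthogonality holds exactly, which is precisely the property the deflation-based baselines are shown to violate.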
Ryan Cory-Wright
Imperial Business School
Operations Research · Optimization · Machine Learning · Analytics · Electricity Markets
J. Pauphilet
Management Science and Operations, London Business School, London, UK