🤖 AI Summary
This work addresses the performance limitations of existing representation learning methods in multi-source data fusion, where block-wise missingness and signal heterogeneity pose major challenges. To jointly tackle these issues, the authors propose the Anchor-Projected Principal Component Analysis (APPCA) framework, which first recovers robust column spaces for each data modality and then aligns and denoises these subspaces by projecting shared anchor features onto them before performing PCA. The method integrates an anchor projection mechanism with a spectral slicing perturbation analysis to establish a tight reconstruction error bound that does not depend on the signal strength of the subject embeddings. Experiments on both simulated and real multimodal single-cell sequencing data demonstrate that APPCA substantially outperforms current approaches, exhibiting both theoretical rigor and practical efficacy.
📝 Abstract
Unified representation learning for multi-source data integration faces two key challenges: blockwise missingness and blockwise signal heterogeneity. The former arises when sources observe different, yet potentially overlapping, feature sets, while the latter involves varying signal strengths across subject groups and feature sets. While existing methods perform well with fully observed data or uniform signal strength, their performance degrades when these two challenges coincide, which is common in practice. To address this, we propose Anchor Projected Principal Component Analysis (APPCA), a general framework for representation learning with structured blockwise missingness that is robust to signal heterogeneity. APPCA first recovers robust group-specific column spaces using all observed feature sets, and then aligns them by projecting shared "anchor" features onto these subspaces before performing PCA. This projection step induces a significant denoising effect. We establish estimation error bounds for embedding reconstruction through a fine-grained perturbation analysis. In particular, using a novel spectral slicing technique, our bound eliminates the standard dependency on the signal strength of subject embeddings, relying instead solely on the signal strength of integrated feature sets. We validate the proposed method through extensive simulation studies and an application to multimodal single-cell sequencing data.
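To make the pipeline described above concrete, here is a minimal NumPy sketch of the two-step procedure: estimate a group-specific column space from all observed features, project the shared anchor block onto that subspace to denoise it, then run PCA on the stacked anchors. All names, dimensions, and the use of a plain truncated SVD are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of the APPCA steps from the abstract (assumed details).
rng = np.random.default_rng(0)

# Two subject groups observing overlapping feature sets; by assumption,
# the first `n_anchor` columns of each matrix are the shared anchor features.
n_anchor, r = 20, 3
X1 = rng.standard_normal((100, 50))   # group 1: anchors + its own features
X2 = rng.standard_normal((80, 60))    # group 2: anchors + different features

def column_space(X, r):
    """Rank-r orthonormal basis for the column space of X via truncated SVD."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :r]                    # shape (n_samples, r)

def project_anchors(X, basis, n_anchor):
    """Project the shared anchor block onto the estimated subspace.

    Applying the projector U U^T to the anchor columns is the denoising
    step: it keeps only the anchor variation explained by the subspace.
    """
    A = X[:, :n_anchor]
    return basis @ (basis.T @ A)       # shape (n_samples, n_anchor)

# Step 1: robust group-specific column spaces from all observed features.
U1, U2 = column_space(X1, r), column_space(X2, r)

# Step 2: denoised anchor blocks share one feature space, so the groups
# can be stacked and aligned on the anchors.
A = np.vstack([project_anchors(X1, U1, n_anchor),
               project_anchors(X2, U2, n_anchor)])

# Step 3: PCA on the stacked, denoised anchors yields joint embeddings.
A_centered = A - A.mean(axis=0)
_, _, Vt = np.linalg.svd(A_centered, full_matrices=False)
embeddings = A_centered @ Vt[:r].T     # shape (180, r) subject embeddings
```

The random matrices stand in for real data; in practice the subspace rank `r` and the anchor set would come from the observed block structure, and the paper's robust subspace recovery would replace the plain SVD used here.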