🤖 AI Summary
This work addresses the challenge of learning robust latent representations from high-dimensional discrete data—such as electronic health records—when the sample size is substantially smaller than the feature dimensionality. The authors propose a semantic-guided latent representation learning framework that, for the first time, integrates external semantic embeddings into a latent projection model via a smooth mapping in a reproducing kernel Hilbert space, thereby regularizing the learning process through alignment between column-wise and semantic embeddings. By leveraging kernel principal component analysis to construct a semantic subspace and designing a scalable projected gradient descent algorithm, the method enables efficient two-stage estimation. Theoretical analysis provides local convergence guarantees for the non-convex optimization, and experiments demonstrate significant improvements in both robustness and accuracy of learned representations on simulated and real-world electronic health record data.
📝 Abstract
Latent space models are widely used for analyzing high-dimensional discrete data matrices, such as patient-feature matrices in electronic health records (EHRs), by capturing complex dependence structures through low-dimensional embeddings. However, estimation becomes challenging in the imbalanced regime, where one matrix dimension is much larger than the other. In EHR applications, cohort sizes are often limited by disease prevalence or data availability, whereas the feature space remains extremely large due to the breadth of medical coding system. Motivated by the increasing availability of external semantic embeddings, such as pre-trained embeddings of clinical concepts in EHRs, we propose a knowledge-embedded latent projection model that leverages semantic side information to regularize representation learning. Specifically, we model column embeddings as smooth functions of semantic embeddings via a mapping in a reproducing kernel Hilbert space. We develop a computationally efficient two-step estimation procedure that combines semantically guided subspace construction via kernel principal component analysis with scalable projected gradient descent. We establish estimation error bounds that characterize the trade-off between statistical error and approximation error induced by the kernel projection. Furthermore, we provide local convergence guarantees for our non-convex optimization procedure. Extensive simulation studies and a real-world EHR application demonstrate the effectiveness of the proposed method.