🤖 AI Summary
This work proposes a bias-free rotation method within a latent-variable representation learning framework, designed to overcome the non-estimable bias that conventional rotation approaches introduce into sparse representation learning and that hinders valid statistical inference. The method simultaneously achieves sparsity, interpretability, and statistical validity, and for the first time establishes an oracle inference property for rotated sparse representations: the estimators attain the ideal asymptotic variance achievable when the latent variables are observable. By combining a bias-free rotation algorithm grounded in latent-variable modeling, an efficient computational framework, and rigorous asymptotic theory, the approach supports reliable confidence interval construction and hypothesis testing while preserving the parsimony of the representation.
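To make the summary concrete: rotation methods arise because latent-variable models identify the representation only up to rotation. The paper's exact model is not stated here, so the LaTeX sketch below uses a standard linear factor model, with illustrative symbols ($X_i$, $\Lambda$, $F_i$, $R$, $\Sigma_{\mathrm{oracle}}$ are assumptions, not the paper's notation), to show the indeterminacy that rotations resolve and what the oracle-variance statement asserts.

```latex
% Illustrative linear factor model (an assumption; the paper's exact setup
% is not given in this summary): n observations X_i in R^p with
% k-dimensional latent variables F_i.
\[
  X_i = \Lambda F_i + \varepsilon_i, \qquad i = 1, \dots, n .
\]
% The loadings are identified only up to rotation, since for any
% invertible R,
\[
  \Lambda F_i = (\Lambda R)\bigl(R^{-1} F_i\bigr),
\]
% so a rotation criterion is used to select a sparse, interpretable
% representative from this equivalence class. The oracle inference
% property asserts that the rotated estimator attains the asymptotic
% variance of the ideal setting in which the F_i are observed:
\[
  \sqrt{n}\,\operatorname{vec}\bigl(\widehat{\Lambda} - \Lambda\bigr)
  \;\xrightarrow{\;d\;}\;
  \mathcal{N}\bigl(0, \Sigma_{\mathrm{oracle}}\bigr).
\]
```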
📝 Abstract
Learning low-dimensional latent representations is a central topic in statistics and machine learning, and rotation methods have long been used to obtain sparse and interpretable representations. Despite nearly a century of widespread use across many fields, rigorous guarantees of valid inference on the learned representations remain lacking. In this paper, we identify a surprisingly prevalent phenomenon that suggests a reason for this gap: for a broad class of vintage rotations, the resulting estimators exhibit a non-estimable bias. Because this bias cannot be estimated from the data, no data-driven correction can remove it, which fundamentally precludes valid inferential procedures, including confidence interval construction and hypothesis testing. To address this challenge, we propose a novel bias-free rotation method within a general representation learning framework based on latent variables. We establish an oracle inference property for the learned sparse representations: the estimators achieve the same asymptotic variance as in the ideal setting where the latent variables are observed. To bridge the gap between theory and computation, we develop an efficient computational framework and prove that its output estimators retain the same oracle property. Our results provide a rigorous inference procedure for the rotated estimators, yielding statistically valid and interpretable representation learning.
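For readers unfamiliar with the baseline, the "vintage rotations" the abstract critiques are classical criteria such as varimax (Kaiser, 1958). The paper's bias-free method is not reproduced here; as a point of reference only, the sketch below implements the classical varimax rotation in NumPy, i.e., the kind of rotated estimator the abstract argues carries a non-estimable bias.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Classical varimax rotation (Kaiser, 1958) of a p x k loading matrix.

    Shown as a baseline 'vintage' rotation only; this is not the paper's
    bias-free method. Returns the rotated loadings and the orthogonal
    rotation matrix R.
    """
    p, k = loadings.shape
    R = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion with respect to R.
        grad = loadings.T @ (L**3 - (gamma / p) * L * np.sum(L**2, axis=0))
        # Project the gradient onto the orthogonal group via its SVD.
        u, s, vt = np.linalg.svd(grad)
        R = u @ vt
        new_criterion = s.sum()
        if new_criterion < criterion * (1.0 + tol):
            break  # the criterion has stopped improving
        criterion = new_criterion
    return loadings @ R, R

# Minimal usage: rotate the top-3 principal-component loadings of random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
X -= X.mean(axis=0)
_, S, Vt = np.linalg.svd(X, full_matrices=False)
raw = (Vt[:3].T * S[:3]) / np.sqrt(X.shape[0])  # p x k unrotated loadings
rotated, R = varimax(raw)
print(np.round(rotated, 2))
```

Per the abstract, estimators produced by this style of criterion-driven rotation carry a bias that no data-driven correction can remove, which is precisely the gap the proposed bias-free rotation is designed to close.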