🤖 AI Summary
Learning high-quality dense representations from ultra-large-scale sparse co-occurrence data (e.g., user-item interactions, word-pair co-occurrences) remains challenging in the absence of explicit covariates. Such data exhibit zero-inflation and continuous-valued weights, which standard models fail to capture adequately.
Method: We propose the Shared parameter Alternating Tweedie (SA-Tweedie) model, which employs the Tweedie distribution for cooccurrence data exhibiting both excess zeros and continuous nonnegative weighted counts. SA-Tweedie pairs an inner-loop Fisher scoring update with a learning rate adjustment that keeps the algorithm on the optimizing direction, enabling efficient estimation without observed covariates.
Contribution/Results: Numerical studies show that the pseudo-likelihood approach is not suitable for shared parameter alternating regression models with unobserved covariates. SA-Tweedie is memory-efficient and scales to cooccurrence matrices too large to fit in a computer's memory. Empirically, it outperforms plain Fisher scoring and gradient descent with Adam updates on both simulated and real data, achieving more stable convergence and better representation quality.
📝 Abstract
In this article, we present a model for analyzing cooccurrence count data arising in practical settings, such as user-item or item-item data from online shopping platforms and cooccurring word-word pairs in sequences of text. Such data contain important information for developing recommender systems or for studying the relevance of items or words from non-numerical sources. Unlike traditional regression settings, there are no observations for covariates. Additionally, the cooccurrence matrix is typically of such high dimension that it does not fit into a computer's memory for modeling. We extract numerical data by defining windows of cooccurrence and using weighted counts on a continuous scale, with positive probability mass allowed at zero. We present the Shared parameter Alternating Tweedie (SA-Tweedie) model and an algorithm to estimate its parameters. We introduce a learning rate adjustment, used along with the Fisher scoring method in the inner loop, to help the algorithm stay on the optimizing direction. Gradient descent with Adam updates was also considered as an alternative estimation method. Simulation studies and an application showed that our algorithm with Fisher scoring and learning rate adjustment outperforms the other two methods. A pseudo-likelihood approach with alternating parameter updates was also studied; numerical studies showed that the pseudo-likelihood approach is not suitable for our shared parameter alternating regression models with unobserved covariates.
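To make the alternating idea concrete, below is a minimal, hedged sketch (not the authors' implementation; all function names and the power parameter `p = 1.5` are our illustrative choices). Each entry y_ij is modeled as Tweedie with mean mu_ij = exp(u_i . v_j) and power 1 < p < 2, which places positive probability mass at zero while allowing continuous positive values. Rows of one factor are updated by Fisher scoring with step-halving (a simple form of learning rate adjustment) while the other factor is held fixed, then the roles alternate.

```python
import numpy as np

def tweedie_negloglik(Y, U, V, p=1.5):
    """Tweedie quasi-negative-log-likelihood (dispersion dropped), 1 < p < 2."""
    M = np.exp(U @ V.T)
    return -np.sum(Y * M**(1 - p) / (1 - p) - M**(2 - p) / (2 - p))

def fisher_sweep(Y, U, V, p=1.5, lr=1.0, halvings=10):
    """One Fisher-scoring sweep over the rows of U, holding V fixed.
    Step-halving accepts a step only if the objective improves."""
    for i in range(U.shape[0]):
        mu = np.exp(V @ U[i])                      # fitted means for row i
        grad = V.T @ (mu**(1 - p) * (Y[i] - mu))   # score vector for u_i
        info = (V * mu[:, None]**(2 - p)).T @ V    # Fisher information matrix
        step = np.linalg.solve(info + 1e-8 * np.eye(U.shape[1]), grad)
        old = tweedie_negloglik(Y, U, V, p)
        eta = lr
        for _ in range(halvings):                  # learning rate adjustment
            trial = U.copy()
            trial[i] = U[i] + eta * step
            if tweedie_negloglik(Y, trial, V, p) < old:
                U[i] = trial[i]
                break
            eta *= 0.5
    return U

# Toy zero-inflated continuous "counts": Poisson gate times gamma weight.
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((20, 3))
V = 0.1 * rng.standard_normal((15, 3))
Y = rng.poisson(1.0, size=(20, 15)) * rng.gamma(2.0, 0.5, size=(20, 15))

before = tweedie_negloglik(Y, U, V)
for _ in range(5):                                 # alternate the two factors
    U = fisher_sweep(Y, U, V)
    V = fisher_sweep(Y.T, V, U)
after = tweedie_negloglik(Y, U, V)
```

Because a step is accepted only when it lowers the objective, the negative log-likelihood is monotone nonincreasing across sweeps, which is the stabilizing role the abstract attributes to the learning rate adjustment.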