🤖 AI Summary
Topics over Time (ToT) models suffer from unstable online inference because the original formulation is not fully Bayesian, and they must reconcile a modality mismatch between the single timestamp attached to each document and its many word observations. To address these issues, the paper proposes fully Bayesian Topics over Time (BToT), which introduces a conjugate prior for the Beta distributions that ToT places over timestamps, enabling rigorous Bayesian inference, and a weighted variant, Weighted Bayesian Topics over Time (WBToT), that replicates each document's timestamp to balance the scale discrepancy between the temporal and lexical modalities. Both models combine fully Bayesian variational inference with online stochastic optimization, supporting scalable streaming inference. Evaluated on the State-of-the-Union (SOTU) corpus and a COVID-19 Twitter dataset of 10 million tweets, WBToT reduces the median absolute deviation of topic presence over time by 51% and 34% relative to LDA and BERTopic, respectively, and achieves higher topic coherence than BToT. The method demonstrates strong robustness and scalability.
📝 Abstract
The Topics over Time (ToT) model captures thematic changes in timestamped corpora by jointly modeling publication dates and word co-occurrence patterns. However, ToT was not formulated in a fully Bayesian fashion, which makes its inference susceptible to stability problems. To address this issue, we propose a fully Bayesian Topics over Time (BToT) model by introducing a conjugate prior for the Beta distribution. This prior acts as a regularizer that prevents the online version of the algorithm from making unstable updates when a topic is poorly represented in a mini-batch. The properties of this conjugate prior for the Beta distribution are studied here for the first time. Still, BToT suffers from a difference in scale between the single timestamp observed per document and the multiplicity of words it contains. A variant of BToT, Weighted Bayesian Topics over Time (WBToT), is proposed as a solution: publication dates are repeated a certain number of times per document, which balances the relative influence of words and timestamps during inference. We have tested our models on two datasets: a collection of over 200 years of US State of the Union (SOTU) addresses and a large-scale COVID-19 Twitter corpus of 10 million tweets. The results show that WBToT captures events better than Latent Dirichlet Allocation and other state-of-the-art topic models such as BERTopic: the median absolute deviation of topic presence over time is reduced by $51\%$ and $34\%$, respectively. Our experiments also demonstrate the superior coherence of WBToT over BToT, which highlights the importance of balancing the time and word modalities. Finally, we illustrate the stability of the online optimization algorithm in WBToT, which makes it applicable to problems that are intractable for standard ToT.
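The conjugate prior referenced above can be written down from the exponential-family form of the Beta likelihood. The sketch below does exactly that; the hyperparameter names $(s, t, \nu)$ are chosen here for illustration and may differ from the paper's parameterization.

```latex
% Beta likelihood for one normalized timestamp x in (0,1), topic parameters (a, b):
\[
  p(x \mid a, b) = \frac{x^{a-1} (1-x)^{b-1}}{B(a, b)}
\]
% n i.i.d. timestamps contribute
% exp( (a-1) \sum_i \ln x_i + (b-1) \sum_i \ln(1-x_i) - n \ln B(a, b) ),
% so a conjugate prior with hyperparameters (s, t, nu) has the matching form
\[
  \pi(a, b \mid s, t, \nu) \propto
  \exp\!\bigl( (a-1)\, s + (b-1)\, t - \nu \ln B(a, b) \bigr)
\]
% and the posterior stays in this family with the closed-form update
\[
  s' = s + \sum_{i=1}^{n} \ln x_i, \qquad
  t' = t + \sum_{i=1}^{n} \ln(1 - x_i), \qquad
  \nu' = \nu + n
\]
```

Conjugacy means a streaming algorithm only has to accumulate the log-timestamp sufficient statistics, and the prior pseudo-counts $(s, t, \nu)$ anchor those statistics when a mini-batch contributes little evidence for a topic.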
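The timestamp-replication idea combines naturally with such a prior in an online update. Below is a minimal Python sketch, assuming the $(s, t, \nu)$ prior above and a standard stochastic-variational-style step; the function name, signature, and the exact way the replication weight enters are our assumptions, not the paper's implementation.

```python
import numpy as np

def online_beta_update(lam, prior, batch_times, batch_resp,
                       n_total, weight, rho):
    """One online update for a single topic's Beta timestamp distribution,
    using conjugate-prior natural parameters lam = (s, t, nu).

    lam, prior  : current and prior hyperparameters (s, t, nu)
    batch_times : normalized timestamps in (0, 1), one per document
    batch_resp  : this topic's per-document responsibilities from the
                  variational E-step (computation not shown here)
    n_total     : corpus size, used to extrapolate the mini-batch
    weight      : WBToT-style timestamp replication factor
    rho         : decaying step size, e.g. rho_t = (tau0 + t) ** -kappa
    """
    x = np.asarray(batch_times)
    r = np.asarray(batch_resp)
    scale = n_total / len(x)  # extrapolate batch statistics to the corpus

    # Repeating each timestamp `weight` times is equivalent to scaling its
    # contribution to the sufficient statistics by `weight`.
    stats = weight * scale * np.array([
        np.sum(r * np.log(x)),      # sum of ln x_i
        np.sum(r * np.log1p(-x)),   # sum of ln(1 - x_i)
        np.sum(r),                  # effective number of observations
    ])

    # Blend toward prior + extrapolated batch statistics; the prior term
    # keeps the update bounded when the topic is barely present in the
    # mini-batch, which is the instability the fully Bayesian model fixes.
    target = np.asarray(prior, dtype=float) + stats
    return (1.0 - rho) * np.asarray(lam, dtype=float) + rho * target

# Toy usage: a 4-document mini-batch from a corpus of 10_000 documents.
lam = online_beta_update(lam=(0.0, 0.0, 1.0), prior=(0.0, 0.0, 1.0),
                         batch_times=[0.21, 0.65, 0.66, 0.91],
                         batch_resp=[0.9, 0.1, 0.4, 0.7],
                         n_total=10_000, weight=50, rho=0.05)
```

Making `weight` large shifts influence toward the time modality; `weight = 1` recovers the unweighted BToT behavior in this sketch.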