Quantifying consistency and accuracy of Latent Dirichlet Allocation

📅 2025-11-16
🤖 AI Summary
Latent Dirichlet Allocation (LDA) suffers from instability due to stochastic initialization and sampling, undermining reproducibility and semantic reliability. Method: We construct a synthetic corpus with known ground-truth topic structure and conduct 50 independent replications, proposing a novel stability metric that jointly quantifies topic coherence and similarity. Contribution/Results: Our analysis reveals that while LDA accurately estimates the number of latent topics and achieves high internal coherence, the inferred topics systematically deviate from the true semantic structure—exhibiting “stable incorrectness.” This indicates that LDA frequently misinterprets statistical coincidences as semantically meaningful topics, challenging its suitability for interpretability-critical applications. To our knowledge, this work provides the first systematic quantification of the accuracy–coherence trade-off in LDA, establishing both theoretical insight and empirical benchmarks for model evaluation and refinement.

📝 Abstract
Topic modelling in Natural Language Processing uncovers hidden topics in large, unlabelled text datasets. It is widely applied in fields such as information retrieval, content summarisation, and trend analysis across various disciplines. However, probabilistic topic models can produce different results when rerun due to their stochastic nature, leading to inconsistencies in the latent topics. Factors such as corpus shuffling, rare-text removal, and document elimination contribute to these variations. This instability affects replicability, reliability, and interpretation, raising concerns about whether topic models capture meaningful topics or just noise. To address these problems, we define a new stability measure that incorporates both accuracy and consistency, and we use the generative properties of LDA to create a new corpus with known ground truth. These generated corpora are run through LDA 50 times to determine the variability in the output. We show that LDA can correctly determine the underlying number of topics in the documents. We also find that LDA is internally consistent, as the multiple reruns return similar topics; however, these topics are not the true topics.
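The corpus-generation step described in the abstract follows LDA's generative process: draw topic-word distributions, then for each document draw a topic mixture and sample tokens. The function below is a minimal sketch of that process, not the authors' code; the hyperparameter values (`alpha`, `beta`), vocabulary size, and document length are placeholder assumptions.

```python
import numpy as np

def generate_lda_corpus(n_docs=100, n_topics=5, vocab_size=50,
                        doc_len=40, alpha=0.1, beta=0.01, seed=0):
    """Sample a synthetic corpus from the LDA generative model.

    Returns the corpus (a list of word-id lists) together with the
    ground-truth topic-word distributions (phi) and document-topic
    mixtures (theta), so later inference runs can be scored against them.
    """
    rng = np.random.default_rng(seed)
    # Ground-truth topic-word distributions: one Dirichlet draw per topic.
    phi = rng.dirichlet([beta] * vocab_size, size=n_topics)
    corpus, thetas = [], []
    for _ in range(n_docs):
        # Per-document topic mixture.
        theta = rng.dirichlet([alpha] * n_topics)
        # For each token: pick a topic, then a word from that topic.
        z = rng.choice(n_topics, size=doc_len, p=theta)
        words = [int(rng.choice(vocab_size, p=phi[k])) for k in z]
        corpus.append(words)
        thetas.append(theta)
    return corpus, phi, np.array(thetas)
```

Because the true `phi` and `theta` are retained, any topics inferred from this corpus can be compared directly against the known structure.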
Problem

Research questions and friction points this paper is trying to address.

Quantifying LDA topic model consistency and accuracy
Assessing variability in results due to stochastic nature
Evaluating if topics reflect true structure or noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defined new stability measure for LDA evaluation
Used generative properties to create ground truth corpus
Ran LDA 50 times to analyze output variability
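Comparing the 50 reruns requires pairing each topic from one run with its closest counterpart in another, since topic order is arbitrary across runs. The sketch below uses greedy one-to-one cosine matching over topic-word distributions as an illustrative stand-in; the paper's actual stability metric is not reproduced here.

```python
import numpy as np

def topic_similarity(phi_a, phi_b):
    """Mean best-match cosine similarity between two sets of topics.

    Greedily pairs each topic (row) in phi_a with its most similar
    still-unmatched topic in phi_b, then averages the paired scores.
    A value near 1 indicates two runs recovered essentially the same topics.
    """
    # Normalize rows so the dot product is cosine similarity.
    a = phi_a / np.linalg.norm(phi_a, axis=1, keepdims=True)
    b = phi_b / np.linalg.norm(phi_b, axis=1, keepdims=True)
    sims = a @ b.T  # pairwise cosine similarities, shape (K, K)
    available = list(range(sims.shape[1]))
    matched = []
    for i in range(sims.shape[0]):
        j = max(available, key=lambda c: sims[i, c])  # best unmatched topic
        matched.append(sims[i, j])
        available.remove(j)
    return float(np.mean(matched))
```

Averaging this score over all pairs of reruns gives a consistency estimate, while scoring each run against the generated ground-truth topics gives an accuracy estimate; the paper's "stable incorrectness" finding corresponds to the first being high while the second is low.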
Saranzaya Magsarjav
The School of Computer and Mathematical Sciences, The University of Adelaide, South Australia 5005, Australia
Melissa Humphries
The School of Computer and Mathematical Sciences, The University of Adelaide, South Australia 5005, Australia
Jonathan Tuke
The School of Computer and Mathematical Sciences, The University of Adelaide, South Australia 5005, Australia
Lewis Mitchell
Professor of Data Science, University of Adelaide
online social networks · computational social science · data science · complex systems · data assimilation