🤖 AI Summary
Latent Dirichlet Allocation (LDA) suffers from instability due to stochastic initialization and sampling, undermining reproducibility and semantic reliability. Method: We construct a synthetic corpus with known ground-truth topic structure and conduct 50 independent replications, proposing a novel stability metric that jointly quantifies topic accuracy and consistency. Contribution/Results: Our analysis reveals that while LDA accurately estimates the number of latent topics and achieves high internal coherence, the inferred topics systematically deviate from the true semantic structure, exhibiting "stable incorrectness." This indicates that LDA frequently misinterprets statistical coincidences as semantically meaningful topics, challenging its suitability for interpretability-critical applications. To our knowledge, this work provides the first systematic quantification of the accuracy–coherence trade-off in LDA, establishing both theoretical insight and empirical benchmarks for model evaluation and refinement.
📝 Abstract
Topic modelling in Natural Language Processing uncovers hidden topics in large, unlabelled text datasets. It is widely applied in fields such as information retrieval, content summarisation, and trend analysis across various disciplines. However, probabilistic topic models can produce different results when rerun because of their stochastic nature, leading to inconsistencies in the inferred latent topics. Factors such as corpus shuffling, removal of rare text, and document elimination contribute to these variations. This instability affects replicability, reliability, and interpretation, raising the question of whether topic models capture meaningful topics or merely noise. To address these problems, we define a new stability measure that incorporates both accuracy and consistency, and we exploit the generative properties of LDA to construct a synthetic corpus with known ground truth. This generated corpus is run through LDA 50 times to quantify the variability in the output. We show that LDA can correctly recover the underlying number of topics in the documents. We also find that LDA is internally consistent, in that repeated runs return similar topics; however, these topics are not the true topics.
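The two building blocks described above, sampling a synthetic corpus from LDA's own generative process and scoring how closely recovered topics match the known ground truth, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, hyperparameter defaults, and the greedy cosine-similarity matching used as an accuracy proxy are all assumptions.

```python
import numpy as np

def generate_lda_corpus(n_docs, doc_len, n_topics, vocab_size,
                        alpha=0.1, beta=0.01, seed=0):
    """Sample a synthetic corpus from LDA's generative process.

    Returns the corpus (a list of word-id arrays) together with the
    ground-truth topic-word matrix `phi`, so accuracy can be measured.
    Hyperparameter values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    # Ground-truth topic-word distributions: one Dirichlet draw per topic.
    phi = rng.dirichlet(np.full(vocab_size, beta), size=n_topics)
    docs = []
    for _ in range(n_docs):
        theta = rng.dirichlet(np.full(n_topics, alpha))   # per-document topic mix
        z = rng.choice(n_topics, size=doc_len, p=theta)   # topic for each token
        words = np.array([rng.choice(vocab_size, p=phi[k]) for k in z])
        docs.append(words)
    return docs, phi

def topic_match_score(phi_true, phi_est):
    """Greedily match each true topic to a distinct estimated topic by
    cosine similarity; return the mean similarity of the matched pairs.
    A score near 1 means the estimated topics recover the true ones."""
    norm_t = phi_true / np.linalg.norm(phi_true, axis=1, keepdims=True)
    norm_e = phi_est / np.linalg.norm(phi_est, axis=1, keepdims=True)
    sim = norm_t @ norm_e.T
    score, used = 0.0, set()
    for i in range(sim.shape[0]):
        j = max((j for j in range(sim.shape[1]) if j not in used),
                key=lambda j: sim[i, j])
        used.add(j)
        score += sim[i, j]
    return score / sim.shape[0]
```

In the study's protocol, a corpus like this would be fitted 50 times with different random seeds; comparing each run's topics against `phi` measures accuracy, while comparing runs against each other measures consistency, and the two together form the stability measure.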