🤖 AI Summary
Existing neural topic models suffer from insufficient topic coherence and diversity, as well as poor noise robustness, primarily due to their neglect of fine-grained contextual information surrounding candidate centroid words—rendering them vulnerable to interference from function words. To address this, we propose a corpus-aware candidate centroid word modeling framework. First, we introduce token-level self-similarity as a novel topicality discrimination metric to bridge the contextualization gap. Second, we integrate contextualized embeddings, contrastive learning–inspired token filtering, and an adaptive selection mechanism to achieve noise-robust topic discovery. Evaluated on news and Twitter datasets, our approach achieves a 3.2% improvement in topic coherence and a 4.7% gain in topic diversity over state-of-the-art baselines, demonstrating comprehensive superiority across all major metrics.
📝 Abstract
Topic modelling is a pivotal unsupervised machine learning technique for extracting valuable insights from large document collections. Existing neural topic modelling methods often encode contextual information of documents, while ignoring contextual details of candidate centroid words, leading to the inaccurate selection of topic words due to the contextualization gap. In parallel, it is found that functional words are frequently selected over topical words. To address these limitations, we introduce CAST: Corpus-Aware Self-similarity Enhanced Topic modelling, a novel topic modelling method that builds upon candidate centroid word embeddings contextualized on the dataset, and a novel self-similarity-based method to filter out less meaningful tokens. Inspired by findings in contrastive learning that self-similarities of functional token embeddings in different contexts are much lower than topical tokens, we find self-similarity to be an effective metric to prevent functional words from acting as candidate topic words. Our approach significantly enhances the coherence and diversity of generated topics, as well as the topic model's ability to handle noisy data. Experiments on news benchmark datasets and one Twitter dataset demonstrate the method's superiority in generating coherent, diverse topics, and handling noisy data, outperforming strong baselines.