🤖 AI Summary
To address catastrophic forgetting and the lack of long-term memory in online topic modeling, this paper proposes the Continual Neural Topic Model (CoNTM). CoNTM introduces an incrementally updated global prior distribution within a variational inference framework, enabling online assimilation of new data while preserving previously learned topics. Unlike Dynamic Topic Models (DTMs), which learn topic evolution from the entire corpus at once, and memoryless online topic models, CoNTM learns topics continually across time steps without forgetting earlier ones. Experiments show that CoNTM consistently outperforms the dynamic topic model in topic quality and predictive perplexity, learns more diverse topics, and captures temporal topic changes more accurately and promptly, reflecting continual adaptation without sacrificing historical knowledge.
📝 Abstract
In continual learning, our aim is to learn a new task without forgetting what was learned previously. In topic models, this translates to learning new topic models without forgetting previously learned topics. Previous work either considered Dynamic Topic Models (DTMs), which learn the evolution of topics based on the entire training corpus at once, or Online Topic Models, which are updated continuously based on new data but do not have long-term memory. To fill this gap, we propose the Continual Neural Topic Model (CoNTM), which continuously learns topic models at subsequent time steps without forgetting what was previously learned. This is achieved using a global prior distribution that is continuously updated. In our experiments, CoNTM consistently outperformed the dynamic topic model in terms of topic quality and predictive perplexity while being able to capture topic changes online. The analysis reveals that CoNTM can learn more diverse topics and better capture temporal changes than existing methods.
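The core mechanism described above, a global prior that is continuously updated as new data arrive, can be illustrated with a deliberately simplified sketch. The snippet below is an assumption-laden toy, not the paper's method: it stands in for the neural variational model with a single Gaussian parameter, and uses the classic continual-learning recipe of turning each step's posterior into the next step's prior via a closed-form precision-weighted update.

```python
import numpy as np


def precision_weighted_update(prior_mu, prior_var, obs_mu, obs_var):
    """Combine a Gaussian prior with a Gaussian data estimate.

    Toy stand-in for a variational posterior update over a global
    topic parameter; CoNTM itself optimizes a neural variational
    objective rather than this closed form.
    """
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs_mu / obs_var)
    return post_mu, post_var


# Continual loop: the posterior at step t becomes the prior at t+1,
# which is how historical knowledge is retained instead of forgotten.
mu, var = 0.0, 10.0  # broad initial global prior
# Hypothetical stream of per-step (estimate, noise variance) pairs.
stream = [(1.0, 1.0), (1.2, 1.0), (0.9, 1.0)]
for obs_mu, obs_var in stream:
    mu, var = precision_weighted_update(mu, var, obs_mu, obs_var)

# The prior's variance shrinks as evidence accumulates across steps,
# and its mean settles near the stream's estimates.
print(mu, var)
```

The design point this illustrates is the contrast drawn in the abstract: a memoryless online model would discard `prior_mu`/`prior_var` at each step, whereas carrying them forward gives the long-term memory that CoNTM's continuously updated global prior provides.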