🤖 AI Summary
Traditional topic models struggle to discover fine-grained, interpretable topic patterns in low-resource domains. To address this, we propose an aspect-oriented weighted topic modeling framework. Our method uses domain knowledge to define multi-dimensional aspect keywords and, as its novel step, integrates aspect-level semantic weights into standard topic modeling pipelines (e.g., LDA). It dynamically refines topic representations by computing aspect-topic correlations and selects highly relevant new documents. The core contribution is enabling aspect-aware, fine-grained topic optimization and content discovery. Experimental results show that high-scoring documents achieve 89.3% topic coverage on target aspects, substantially improving aspect coherence and model interpretability. This work establishes a new paradigm for domain-specific topic evolution analysis and empirical validation.
📝 Abstract
Topic modeling often requires examining topics from multiple perspectives to uncover hidden patterns, especially in less-explored areas. This paper presents an approach that addresses this need using weighted keywords for multiple aspects derived from domain knowledge. The method starts with standard topic modeling and then adds a process consisting of four key steps. First, it defines keywords for each aspect. Second, it assigns weights to these keywords based on their relevance. Third, it computes relevance scores between aspect-weighted keywords and topic keywords to create aspect-topic models. Fourth, it uses these scores to select relevant new documents. Finally, the generated topic models are interpreted and validated. The findings show that top-scoring documents are more likely to concern the same aspect of a topic, highlighting the model's effectiveness at finding documents related to the target aspects.
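The four-step process above can be sketched in code. This is a minimal illustration, not the paper's implementation: the aspect names, keyword weights, topic word distributions, and the weighted-overlap scoring function are all hypothetical assumptions chosen to make the pipeline concrete.

```python
# Illustrative sketch of the four-step aspect-topic scoring pipeline.
# All keyword sets, weights, and distributions are invented examples.

# Steps 1-2: define keywords per aspect and assign relevance weights.
aspects = {
    "privacy":   {"encryption": 0.9, "anonymity": 0.8, "consent": 0.6},
    "usability": {"interface": 0.9, "accessibility": 0.7, "latency": 0.5},
}

# Topic keyword probabilities, e.g. top words from a trained LDA model.
topics = {
    0: {"encryption": 0.12, "consent": 0.05, "interface": 0.01},
    1: {"interface": 0.10, "latency": 0.08, "accessibility": 0.04},
}

def aspect_topic_score(aspect_weights, topic_words):
    """Step 3: relevance of a topic to an aspect, here taken as the
    weighted overlap of aspect keywords with topic probabilities."""
    return sum(w * topic_words.get(kw, 0.0)
               for kw, w in aspect_weights.items())

def score_document(doc_topic_dist, aspect_weights):
    """Step 4: score a document by mixing its topic proportions with
    each topic's relevance to the target aspect; high scores mark
    documents worth selecting for that aspect."""
    return sum(p * aspect_topic_score(aspect_weights, topics[t])
               for t, p in doc_topic_dist.items())

# A document's topic proportions (e.g. from LDA inference on a new doc).
doc = {0: 0.7, 1: 0.3}
print(score_document(doc, aspects["privacy"]))  # higher = more on-aspect
```

Ranking new documents by this score and keeping the top-scoring ones corresponds to the document-selection step described in the abstract; any similarity measure between the weighted keyword vectors (e.g., cosine similarity) could replace the simple weighted overlap used here.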