Topic Analysis with Side Information: A Neural-Augmented LDA Approach

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional topic models (e.g., LDA) struggle to incorporate auxiliary information such as metadata, user attributes, or document labels, which limits their expressiveness, personalization, and interpretability. To address this, the authors propose a neural-augmented LDA framework centered on an **auxiliary-feature-driven neural prior mechanism**: a neural network captures nonlinear relationships between auxiliary features and topic distributions, generating conditional Dirichlet priors for each document. The probabilistic model and the neural component are jointly optimized end-to-end via stochastic variational EM. Experiments on multiple benchmark datasets show that the method consistently outperforms standard LDA and Dirichlet-Multinomial Regression on topic coherence, perplexity, and downstream classification accuracy. The framework thus offers greater expressiveness, effective personalized modeling, and principled interpretability.

📝 Abstract
Traditional topic models such as Latent Dirichlet Allocation (LDA) have been widely used to uncover latent structures in text corpora, but they often struggle to integrate auxiliary information such as metadata, user attributes, or document labels. These limitations restrict their expressiveness, personalization, and interpretability. To address this, we propose nnLDA, a neural-augmented probabilistic topic model that dynamically incorporates side information through a neural prior mechanism. nnLDA models each document as a mixture of latent topics, where the prior over topic proportions is generated by a neural network conditioned on auxiliary features. This design allows the model to capture complex nonlinear interactions between side information and topic distributions that static Dirichlet priors cannot represent. We develop a stochastic variational Expectation-Maximization algorithm to jointly optimize the neural and probabilistic components. Across multiple benchmark datasets, nnLDA consistently outperforms LDA and Dirichlet-Multinomial Regression in topic coherence, perplexity, and downstream classification. These results highlight the benefits of combining neural representation learning with probabilistic topic modeling in settings where side information is available.
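The paper does not include code, but the core idea of the neural prior mechanism can be illustrated with a minimal sketch: a small network maps a document's auxiliary-feature vector to positive Dirichlet concentration parameters, one per topic. All names, dimensions, and weights below are hypothetical toy values, not the authors' implementation (assumed here: a one-hidden-layer MLP with a softplus output to keep parameters positive).

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # numerically stable softplus keeps Dirichlet parameters strictly > 0
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def neural_prior(aux, W1, b1, W2, b2):
    """Map a document's auxiliary-feature vector to Dirichlet
    concentration parameters alpha (one per topic)."""
    h = np.tanh(aux @ W1 + b1)       # hidden nonlinearity over side information
    return softplus(h @ W2 + b2)     # alpha_k > 0 for every topic k

# toy dimensions: 5 auxiliary features, 8 hidden units, 4 topics
n_aux, n_hidden, n_topics = 5, 8, 4
W1 = rng.normal(scale=0.1, size=(n_aux, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_topics)); b2 = np.zeros(n_topics)

aux = rng.normal(size=n_aux)              # one document's side information
alpha = neural_prior(aux, W1, b1, W2, b2) # conditional Dirichlet prior
theta = rng.dirichlet(alpha)              # sample topic proportions for the doc
```

Because alpha depends on the auxiliary features, two documents with different metadata receive different priors over topic proportions, which a single static Dirichlet prior cannot express.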
Problem

Research questions and friction points this paper is trying to address.

Integrating auxiliary metadata into topic models to enhance expressiveness
Capturing nonlinear interactions between side information and topic distributions
Improving topic coherence and classification with neural-augmented LDA
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural network generates topic proportion priors
Stochastic variational EM optimizes neural components
Model captures nonlinear side information interactions
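To make the optimization concrete, here is a sketch of the mean-field E-step for one document, holding the neural prior's alpha fixed; the paper's stochastic variational EM would alternate such E-steps with gradient updates to the prior network and topic-word parameters. This is the standard LDA variational update, not the authors' exact algorithm, and all names and toy values are hypothetical (digamma is implemented in pure NumPy to stay self-contained).

```python
import numpy as np

def digamma(x):
    # pure-NumPy digamma: recurrence to push x >= 6, then asymptotic series
    x = np.asarray(x, dtype=float).copy()
    result = np.zeros_like(x)
    while np.any(x < 6):
        mask = x < 6
        result[mask] -= 1.0 / x[mask]
        x = np.where(mask, x + 1, x)
    inv = 1.0 / x
    inv2 = inv * inv
    return result + np.log(x) - 0.5 * inv - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def e_step(alpha, beta, word_ids, counts, n_iter=50):
    """Mean-field E-step for one document: iterate updates of phi
    (per-token topic responsibilities) and gamma (variational Dirichlet
    over topic proportions), with alpha from the neural prior held fixed."""
    K = alpha.size
    gamma = alpha + counts.sum() / K
    for _ in range(n_iter):
        # phi_{nk} proportional to beta_{k, w_n} * exp(E[log theta_k])
        log_phi = np.log(beta[:, word_ids].T) + digamma(gamma)
        phi = np.exp(log_phi - log_phi.max(axis=1, keepdims=True))
        phi /= phi.sum(axis=1, keepdims=True)
        gamma = alpha + (counts[:, None] * phi).sum(axis=0)
    return gamma, phi

# toy setup: K=3 topics over a V=6 word vocabulary
K, V = 3, 6
rng = np.random.default_rng(1)
beta = rng.dirichlet(np.ones(V), size=K)   # topic-word distributions
alpha = np.array([0.5, 1.0, 2.0])          # would come from the neural prior
word_ids = np.array([0, 2, 2, 5])          # one document's word tokens
counts = np.ones_like(word_ids, dtype=float)
gamma, phi = e_step(alpha, beta, word_ids, counts)
```

In the full model, the M-step would backpropagate through the ELBO to update the prior network's weights, so documents with informative side information get sharper, better-fitting priors over time.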
Biyi Fang (Northwestern University)
Kripa Rajshekhar (Metonymize)
Truong Vo (Northwestern University)
Diego Klabjan (Northwestern University)
Machine learning