E-LDA: Toward Interpretable LDA Topic Models with Strong Guarantees in Logarithmic Parallel Time

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the critical problem of document-level topic inference in Latent Dirichlet Allocation (LDA) models. Methodologically, it introduces the first non-gradient, combinatorial algorithm for LDA topic estimation, and it achieves convergence to near-optimal posterior probability in logarithmically many parallel rounds, a theoretical first for this setting. Through formal interpretability analysis, the method guarantees both that each learned topic's top keywords are semantically interpretable and that inferred topics remain statistically independent, which preserves the assumptions needed for downstream causal inference. Compared with state-of-the-art approaches, including standard LDA, neural topic models, and large-language-model-based methods, the proposed algorithm consistently improves semantic coherence and topic quality across diverse text corpora while providing strict posterior optimality guarantees.

📝 Abstract
In this paper, we provide the first practical algorithms with provable guarantees for the problem of inferring the topics assigned to each document in an LDA topic model. This is the primary inference problem for many applications of topic models in social science, data exploration, and causal inference settings. We obtain this result by showing a novel non-gradient-based, combinatorial approach to estimating topic models. This yields algorithms that converge to near-optimal posterior probability in logarithmic parallel computation time (adaptivity) -- exponentially faster than any known LDA algorithm. We also show that our approach can provide interpretability guarantees such that each learned topic is formally associated with a known keyword. Finally, we show that unlike alternatives, our approach can maintain the independence assumptions necessary to use the learned topic model for downstream causal inference methods that allow researchers to study topics as treatments. In terms of practical performance, our approach consistently returns solutions of higher semantic quality than solutions from state-of-the-art LDA algorithms, neural topic models, and LLM-based topic models across a diverse range of text datasets and evaluation parameters.
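To make the inference task concrete, here is a minimal baseline sketch of document-level topic inference using standard variational LDA from scikit-learn. This is not the paper's combinatorial algorithm, only an illustration of the problem it targets: estimating each document's topic mixture. The toy corpus and topic count are assumptions for demonstration.

```python
# Baseline document-level topic inference with standard LDA
# (scikit-learn's variational implementation, NOT the paper's method).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: two rough themes (finance, sports).
docs = [
    "stock market trading prices",
    "soccer match goal players",
    "market prices rise on earnings",
    "players score a goal in the final match",
]

# Bag-of-words counts, then fit a 2-topic LDA model.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)

# fit_transform returns the per-document topic mixtures:
# one row per document, one column per topic, rows summing to 1.
doc_topics = lda.fit_transform(counts)
print(doc_topics.shape)  # (4, 2)
```

Gradient-free approaches like the one proposed here aim to recover these per-document mixtures with provable posterior guarantees, which variational baselines of this kind do not offer.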
Problem

Research questions and friction points this paper is trying to address.

Infer topics per document in LDA with provable guarantees
Achieve interpretability by linking topics to known keywords
Maintain topic independence for downstream causal inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-gradient combinatorial topic estimation
Logarithmic parallel computation time (adaptivity)
Interpretability with known keywords