Neural Topic Modeling with Large Language Models in the Loop

📅 2024-11-13
🏛️ arXiv.org
📈 Citations: 1 · Influential: 0
🤖 AI Summary
To address incomplete topic coverage, semantic misalignment, and low inference efficiency when directly applying large language models (LLMs) to topic modeling, this paper proposes an LLM-in-the-loop collaborative framework that integrates LLMs with neural topic models (NTMs). Its core contribution is a dynamic, confidence-driven topic alignment mechanism grounded in optimal transport, which combines the LLM's strong semantic understanding with the NTM's efficient latent-variable representations, without requiring LLM fine-tuning and while remaining compatible with mainstream NTM architectures. The framework preserves document representation fidelity while substantially improving topic interpretability and coherence. Extensive experiments on multiple benchmark datasets show consistent gains over both existing LLM-augmented and purely neural topic modeling approaches, with strong generalization and controllable computational overhead.

📝 Abstract
Topic modeling is a fundamental task in natural language processing, allowing the discovery of latent thematic structures in text corpora. While Large Language Models (LLMs) have demonstrated promising capabilities in topic discovery, their direct application to topic modeling suffers from issues such as incomplete topic coverage, misalignment of topics, and inefficiency. To address these limitations, we propose LLM-ITL, a novel LLM-in-the-loop framework that integrates LLMs with Neural Topic Models (NTMs). In LLM-ITL, global topics and document representations are learned through the NTM. Meanwhile, an LLM refines these topics using an Optimal Transport (OT)-based alignment objective, where the refinement is dynamically adjusted based on the LLM's confidence in suggesting topical words for each set of input words. With the flexibility of being integrated into many existing NTMs, the proposed approach enhances the interpretability of topics while preserving the efficiency of NTMs in learning topics and document representations. Extensive experiments demonstrate that LLM-ITL helps NTMs significantly improve their topic interpretability while maintaining the quality of document representation. Our code and datasets are available at https://github.com/Xiaohao-Yang/LLM-ITL
Problem

Research questions and friction points this paper is trying to address.

Direct application of LLMs to topic modeling yields incomplete topic coverage
Topics suggested by LLMs can be misaligned with the corpus
Topic inference with LLMs alone is inefficient
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLMs with Neural Topic Models
Uses Optimal Transport for topic alignment
Dynamically adjusts refinement based on LLM confidence
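The confidence-weighted OT refinement described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, shapes, and the use of a plain Sinkhorn solver are assumptions; in LLM-ITL the OT objective aligns a topic's word distribution with the LLM's suggested word distribution, scaled by the LLM's confidence.

```python
# Hypothetical sketch of a confidence-weighted OT alignment loss
# (illustrative only; names and shapes are assumptions, not the paper's code).
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, n_iters=200):
    """Entropic-regularized OT (Sinkhorn iterations) between
    distributions a and b under the given cost matrix."""
    K = np.exp(-cost / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)            # scale to match marginal b
        u = a / (K @ v)              # scale to match marginal a
    plan = u[:, None] * K * v[None, :]
    return float((plan * cost).sum())  # transport cost = alignment distance

def alignment_loss(topic_dist, llm_dist, cost, confidence):
    """OT distance between a topic's word distribution and the LLM's
    suggested word distribution, down-weighted when the LLM is unsure."""
    return confidence * sinkhorn(topic_dist, llm_dist, cost)
```

Weighting by confidence means a topic is only pulled toward the LLM's suggestion when the LLM is confident about its topical words; unconfident suggestions contribute little to the training signal.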
👥 Authors
Xiaohao Yang
He Zhao
CSIRO’s Data61, Australia
Weijie Xu
Amazon
Yuanyuan Qi
Faculty of IT, Monash University, Australia
Jueqing Lu
Monash University
Dinh Q. Phung
Faculty of IT, Monash University, Australia
Lan Du
Faculty of IT, Monash University, Australia