XTRA: Cross-Lingual Topic Modeling with Topic and Representation Alignments

📅 2025-10-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Cross-lingual topic modeling suffers from low topic coherence and inconsistent cross-lingual alignment. To address this, we propose DualAlign—a unified framework integrating bag-of-words representations with multilingual embeddings—featuring a dual-alignment mechanism: it jointly optimizes representation alignment (document/word embeddings) and topic alignment (document-topic and topic-word distributions) within a shared semantic space. DualAlign employs contrastive learning to enhance cross-lingual semantic consistency and introduces probabilistic distribution projection to synergistically optimize neural and traditional topic modeling objectives. Evaluated on multiple multilingual benchmark datasets, DualAlign achieves significant improvements over state-of-the-art baselines: +4.2% in topic coherence, +3.8% in topic diversity, and +6.1% in cross-lingual alignment quality. The method preserves interpretability inherent in probabilistic topic models while demonstrating strong generalization across languages and domains.

Technology Category

Application Category

📝 Abstract
Cross-lingual topic modeling aims to uncover shared semantic themes across languages. Several methods have been proposed to address this problem, leveraging both traditional and neural approaches. While previous methods have achieved some improvements in topic diversity, they often struggle to ensure high topic coherence and consistent alignment across languages. We propose XTRA (Cross-Lingual Topic Modeling with Topic and Representation Alignments), a novel framework that unifies Bag-of-Words modeling with multilingual embeddings. XTRA introduces two core components: (1) representation alignment, aligning document-topic distributions via contrastive learning in a shared semantic space; and (2) topic alignment, projecting topic-word distributions into the same space to enforce crosslingual consistency. This dual mechanism enables XTRA to learn topics that are interpretable (coherent and diverse) and well-aligned across languages. Experiments on multilingual corpora confirm that XTRA significantly outperforms strong baselines in topic coherence, diversity, and alignment quality. Code and reproducible scripts are available at https: //github.com/tienphat140205/XTRA.
Problem

Research questions and friction points this paper is trying to address.

Uncover shared semantic themes across multiple languages
Improve topic coherence and diversity in cross-lingual modeling
Ensure consistent topic alignment across different languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies Bag-of-Words with multilingual embeddings
Aligns document distributions via contrastive learning
Projects topic-word distributions for crosslingual consistency
T
Tien Phat Nguyen
Hanoi University of Science and Technology, Vietnam
V
Vu Minh Ngo
Hanoi University of Science and Technology, Vietnam
Tung Nguyen
Tung Nguyen
PhD student at UCLA
Artificial IntelligenceMachine Learning
L
Linh Van Ngo
Hanoi University of Science and Technology, Vietnam
D
Duc Anh Nguyen
Hanoi University of Science and Technology, Vietnam
S
Sang Dinh
Hanoi University of Science and Technology, Vietnam
Trung Le
Trung Le
Faculty of Information Technology, Monash University, Australia
Adversarial Machine LearningGenerative ModelsModel UnlearningModel EditingOptimal Transport