AI Summary
This work proposes an end-to-end global contextual space framework that unifies cross-lingual topic modeling within a shared semantic space, addressing the limitations of existing approaches that operate in disjoint language spaces and rely on external alignment mechanisms. By leveraging a local-global dual-encoder architecture, the model integrates multilingual semantics across input, hidden, and output layers. It further enhances structural and semantic consistency through lexical neighborhood expansion, internal regularization, and a Centered Kernel Alignment (CKA) loss. Experimental results demonstrate that the proposed method significantly outperforms strong baselines on multiple benchmark datasets, achieving superior topic coherence and cross-lingual alignment performance by effectively harnessing the rich semantic signals embedded in multilingual pre-trained representations.
Abstract
Cross-lingual topic modeling seeks to uncover coherent and semantically aligned topics across languages, a task central to multilingual understanding. Yet most existing models learn topics in disjoint, language-specific spaces and rely on alignment mechanisms (e.g., bilingual dictionaries) that often fail to capture deep cross-lingual semantics, resulting in loosely connected topic spaces. Moreover, these approaches often overlook the rich semantic signals embedded in multilingual pre-trained representations, further limiting their ability to capture fine-grained alignment. We introduce GloCTM (Global Context Space for Cross-Lingual Topic Model), a novel framework that enforces cross-lingual topic alignment through a unified semantic space spanning the entire model pipeline. GloCTM constructs enriched input representations by expanding bag-of-words with cross-lingual lexical neighborhoods, and infers topic proportions using both local and global encoders, whose latent representations are aligned through internal regularization. At the output level, the global topic-word distribution, defined over the combined vocabulary, structurally synchronizes topic meanings across languages. To further ground topics in deep semantic space, GloCTM incorporates a Centered Kernel Alignment (CKA) loss that aligns the latent topic space with multilingual contextual embeddings. Experiments across multiple benchmarks demonstrate that GloCTM significantly improves topic coherence and cross-lingual alignment, outperforming strong baselines.
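The abstract does not spell out the CKA term, but Centered Kernel Alignment itself is a standard similarity measure between two sets of representations. As a rough illustration only, the sketch below implements the common *linear* CKA variant and a derived alignment loss (1 − CKA); the function names `linear_cka` and `cka_loss`, the choice of the linear kernel, and the idea of pairing topic vectors with multilingual embeddings row-by-row are assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two representation
    matrices X (n x d1) and Y (n x d2), rows paired by example.

    Returns a value in [0, 1]; 1 means the two spaces are identical
    up to rotation and isotropic scaling.
    """
    # Center each feature dimension (the "Centered" in CKA).
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    cross = np.linalg.norm(Y.T @ X) ** 2          # squared Frobenius norm
    norm_x = np.linalg.norm(X.T @ X)
    norm_y = np.linalg.norm(Y.T @ Y)
    return float(cross / (norm_x * norm_y))

def cka_loss(topic_reprs: np.ndarray, ctx_embeds: np.ndarray) -> float:
    """Hypothetical alignment loss: maximize CKA between latent topic
    representations and multilingual contextual embeddings by
    minimizing 1 - CKA."""
    return 1.0 - linear_cka(topic_reprs, ctx_embeds)
```

In a training loop this term would be computed on a batch of paired document representations and added, suitably weighted, to the topic model's reconstruction objective; a differentiable framework (e.g., PyTorch) would be used in practice rather than NumPy.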