🤖 AI Summary
To address weak cross-domain transferability of general models, large alignment errors for low-frequency word pairs under few-shot settings, and insufficient contextual sensitivity in domain-specific bilingual lexicon induction (BLI), this paper proposes a cross-domain BLI method integrating domain adaptation with a code-switching mechanism. The approach constructs dynamic word embeddings based on pretrained language models and enhances domain-specific semantic alignment through controlled, word-level cross-lingual mixing. It further refines alignment via iterative cross-lingual space mapping. Notably, this work is the first to introduce code-switching into BLI, effectively mitigating the semantic drift of static embeddings in professional contexts. Experiments across medicine, law, and finance demonstrate that the method achieves an average accuracy gain of 0.78 points over strong baselines, including MUSE and VecMap, highlighting its superiority in domain-adaptive lexical alignment.
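The "iterative cross-lingual space mapping" mentioned above is typically built on an orthogonal Procrustes solution, the alignment step used by MUSE- and VecMap-style systems: given paired source/target embeddings, find the orthogonal matrix that best maps one space onto the other. A minimal sketch under that assumption (the function name and toy setup are illustrative, not from the paper):

```python
import numpy as np

def procrustes(X, Y):
    """Return the orthogonal map W minimizing ||X @ W - Y||_F
    (orthogonal Procrustes), as used in MUSE/VecMap-style alignment.

    X, Y: (n, d) arrays of paired source/target word embeddings.
    """
    # Closed-form solution: W = U @ Vt, where U, S, Vt = svd(X^T Y).
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Sanity check: if Y is an exact rotation of X, Procrustes recovers it.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # 100 "source" embeddings, dim 8
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # a random orthogonal map
Y = X @ Q                              # "target" embeddings
W = procrustes(X, Y)
print(np.allclose(W, Q, atol=1e-8))
```

In the iterative variant, the learned mapping is used to induce a larger dictionary of translation pairs, which in turn re-estimates the mapping until accuracy converges.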
📝 Abstract
Bilingual Lexicon Induction (BLI) typically learns monolingual word embeddings from general-domain data and aligns them into a shared cross-lingual space, from which word translation pairs are extracted. In this paper, we propose a new BLI task: using monolingual corpora from both the general domain and a target domain to extract domain-specific bilingual dictionaries. Motivated by the capabilities of pretrained language models, we propose a method, building on recent BLI work, for obtaining better word embeddings. In doing so, we are the first to introduce code-switching (Qin et al., 2020) into the cross-domain BLI task, which can match different strategies in different contexts, making the model more suitable for this task. Experimental results show that our method improves over robust BLI baselines on three specific domains by an average of 0.78 points.

It remains to be seen whether existing methods are suitable for bilingual lexicon extraction in professional fields. As shown in Table 1, the classic and efficient BLI approaches MUSE and VecMap perform much worse on the medical dataset than on the Wiki dataset. On one hand, specialized-domain datasets are generally smaller than general-domain ones, and specialized words occur at lower frequency, which directly degrades the translation quality of the induced bilingual dictionaries. On the other hand, static word embeddings are widely used for BLI; however, in some specialized fields the meaning of a word depends heavily on context, so relying only on static word embeddings can introduce greater bias.
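The controlled, word-level cross-lingual mixing described above can be sketched as a simple data-augmentation step: tokens covered by a small seed lexicon are swapped for their translations with some probability, so the pretrained model sees both languages in one context. A minimal sketch, assuming a seed bilingual lexicon (the function name, switching probability, and toy lexicon below are illustrative, not from the paper):

```python
import random

def code_switch(tokens, seed_lexicon, p=0.5, rng=None):
    """Word-level code-switching: each token found in the seed
    bilingual lexicon is replaced by its translation with
    probability p; all other tokens are kept unchanged."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    switched = []
    for tok in tokens:
        if tok in seed_lexicon and rng.random() < p:
            switched.append(seed_lexicon[tok])
        else:
            switched.append(tok)
    return switched

# Toy example: mix German medical terms into an English sentence.
lexicon = {"patient": "Patient", "fever": "Fieber"}
sentence = "the patient presented with fever".split()
print(code_switch(sentence, lexicon, p=1.0))
# → ['the', 'Patient', 'presented', 'with', 'Fieber']
```

The code-switched sentences are then fed to the pretrained language model, so the contextual embeddings of source and translation words are computed in shared contexts, which is what supports the alignment of low-frequency, domain-specific terms.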