🤖 AI Summary
Low-resource Indian languages lack high-quality domain-specific terminology dictionaries—particularly in critical domains such as healthcare and engineering.
Method: We propose a domain-aware multilingual dictionary generation model featuring a novel two-tier encoder architecture (domain-specific/domain-general) with a learnable dynamic routing mechanism, integrated with explicit cross-Indian-language correlation modeling and cross-lingual semantic alignment. The model supports zero-shot and few-shot transfer.
Contribution/Results: We release the first benchmark dataset covering six Indian languages across eight specialized domains. Our model significantly outperforms baselines under zero-shot and few-shot settings across multiple domains, demonstrating robust cross-lingual and cross-domain generalization. This work establishes a new paradigm for automated construction of technical terminology dictionaries for low-resource languages.
📝 Abstract
Lexicon or dictionary generation across domains is of significant societal importance, as it can potentially enhance information accessibility for a diverse user base while preserving language identity. Prior work in the field primarily focuses on bilingual lexical induction, which deals with word alignments using mapping-based or corpora-based approaches. Though initiated by researchers, the research associated with lexicon generation is limited, even more so with domain-specific lexicons. This task becomes particularly important in atypical medical, engineering, and other technical domains, owing to the highly infrequent usage of the terms and negligibly low data availability of technical terms in many low-resource languages. Owing to the research gap in lexicon generation, especially with a limited focus on the domain-specific area, we propose a new model to generate dictionary words for 6 Indian languages in the multi-domain setting. Our model consists of domain-specific and domain-generic layers that encode information, and these layers are invoked via a learnable routing technique. Further, we propose an approach to explicitly leverage the relatedness between these Indian languages toward coherent translation. We also release a new benchmark dataset across 6 Indian languages that span 8 diverse domains that can propel further research in domain-specific lexicon induction. We conduct both zero-shot and few-shot experiments across multiple domains to show the efficacy of our proposed model in generalizing to unseen domains and unseen languages.