Are Multilingual Language Models an Off-ramp for Under-resourced Languages? Will we arrive at Digital Language Equality in Europe in 2030?

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
European low-resource languages face severe data scarcity in the era of large language models (LLMs), threatening progress toward the EU’s 2030 “digital language equality” objective. Method: This study conducts the first systematic investigation into whether multilingual large language models (MLLMs) can serve as a viable technological bypass. It integrates analysis of multilingual pretraining mechanisms, cross-lingual transfer capabilities, assessments of existing language technology infrastructure, and a comprehensive review of empirical studies. Contribution/Results: The work clarifies the current state of MLLM support for low-resource European languages and identifies three core barriers: imbalanced training data allocation, absence of linguistically appropriate evaluation benchmarks, and insufficient domain generalization. It proposes a coordinated policy–technology framework to address these gaps. The findings provide both theoretical grounding and actionable guidance for advancing AI equity for low-resource languages in Europe.
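One of the three barriers named above, imbalanced training data allocation, is commonly mitigated in multilingual pretraining by temperature-based (alpha-smoothed) sampling, where each language's sampling probability is proportional to its corpus size raised to a power alpha < 1, upweighting under-resourced languages. The sketch below illustrates the idea with made-up corpus sizes (the figures and language choices are illustrative assumptions, not data from the paper):

```python
def sampling_probs(corpus_tokens, alpha=0.3):
    """Per-language sampling probabilities p_i proportional to n_i ** alpha.

    alpha = 1 reproduces the raw corpus proportions; alpha < 1 flattens
    the distribution, giving smaller languages a larger share of batches.
    """
    weights = {lang: n ** alpha for lang, n in corpus_tokens.items()}
    total = sum(weights.values())
    return {lang: w / total for lang, w in weights.items()}

# Illustrative token counts: English dominates, Maltese is under-resourced.
corpus = {"en": 1_000_000_000, "de": 100_000_000, "mt": 1_000_000}

raw = sampling_probs(corpus, alpha=1.0)      # proportional to corpus size
smoothed = sampling_probs(corpus, alpha=0.3)  # upweights the small language

# Maltese's share rises from roughly 0.1% to several percent of batches.
print(f"mt raw: {raw['mt']:.4f}  mt smoothed: {smoothed['mt']:.4f}")
```

Even with smoothing, the paper's point stands: sampling can only rebalance data that exists, so languages with almost no pre-training text still depend on cross-lingual transfer from their higher-resourced neighbours.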

📝 Abstract
Large language models (LLMs) demonstrate unprecedented capabilities and define the state of the art for almost all natural language processing (NLP) tasks and essentially all Language Technology (LT) applications. LLMs can only be trained for languages for which a sufficient amount of pre-training data is available, effectively excluding many languages that are typically characterised as under-resourced. However, there is both circumstantial and empirical evidence that multilingual LLMs, which have been trained on data sets covering multiple languages (including under-resourced ones), do exhibit strong capabilities for some of these under-resourced languages. This approach may ultimately serve as a technological off-ramp for those under-resourced languages for which "native" LLMs, and LLM-based technologies, cannot be developed due to a lack of training data. This paper, which concentrates on European languages, examines this idea, analyses the current situation in terms of technology support and summarises related work. The article concludes by focusing on the key open questions that need to be answered for the approach to be put into practice in a systematic way.
Problem

Research questions and friction points this paper is trying to address.

Explore multilingual LLMs for under-resourced languages.
Assess technology support for European languages.
Identify key questions for practical implementation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual LLMs enhance under-resourced languages
Data scarcity addressed via cross-lingual training
Systematic approach needed for practical application