Open or Closed LLM for Lesser-Resourced Languages? Lessons from Greek

📅 2025-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses key NLP challenges for low-resource languages—particularly Modern Greek—including data scarcity, cross-lingual interference, and domain-specific adaptation (e.g., legal texts). Methodologically, it makes three contributions: (1) the first application of zero-shot authorship attribution as a diagnostic for pretraining data provenance, with high accuracy suggesting that large language models have memorized the underlying texts; (2) the Summarize-Translate-Embed (STE) paradigm, which substantially improves clustering of long legal documents (F1 score +23.6% over TF-IDF); and (3) a systematic benchmark evaluating Llama-70B and GPT-4o mini across seven core NLP tasks, revealing complementary, task-specific strengths across open- and closed-source models. Results indicate broadly comparable overall performance, establishing a reproducible methodology and empirical benchmark for low-resource language NLP.

📝 Abstract
Natural Language Processing (NLP) for lesser-resourced languages faces persistent challenges, including limited datasets, inherited biases from high-resource languages, and the need for domain-specific solutions. This study addresses these gaps for Modern Greek through three key contributions. First, we evaluate the performance of open-source (Llama-70B) and closed-source (GPT-4o mini) large language models (LLMs) on seven core NLP tasks for which datasets are available, revealing task-specific strengths, weaknesses, and parity in their performance. Second, we expand the scope of Greek NLP by reframing authorship attribution as a tool to assess potential data usage by LLMs in pre-training, with high zero-shot accuracy suggesting ethical implications for data provenance. Third, we showcase a legal NLP case study, where a Summarize, Translate, and Embed (STE) methodology outperforms the traditional TF-IDF approach for clustering *long* legal texts. Together, these contributions provide a roadmap to advance NLP in lesser-resourced languages, bridging gaps in model evaluation, task innovation, and real-world impact.
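The STE pipeline described above can be sketched as three composable stages followed by standard clustering. The sketch below is illustrative only: `summarize`, `translate`, and `embed` are hypothetical stand-ins (a word-truncation summarizer, an identity "translation", and TF-IDF vectors), whereas the paper's actual pipeline uses an LLM summarizer, Greek-to-English machine translation, and neural sentence embeddings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def ste_cluster(docs, summarize, translate, embed, n_clusters):
    """Summarize-Translate-Embed: condense each long document, translate the
    summary into a high-resource language, embed, then cluster the vectors."""
    summaries = [summarize(d) for d in docs]
    translated = [translate(s) for s in summaries]
    X = embed(translated)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

# Stand-in components for illustration only:
summarize = lambda d: " ".join(d.split()[:50])    # crude truncation "summary"
translate = lambda s: s                           # identity; real pipeline: Greek -> English MT
embed = lambda texts: TfidfVectorizer().fit_transform(texts)

docs = [
    "contract dispute over delivery terms and payment schedule " * 5,
    "payment schedule breach in a supply contract dispute " * 5,
    "criminal appeal concerning sentencing guidelines " * 5,
]
# The two contract-related documents are expected to share a cluster.
labels = ste_cluster(docs, summarize, translate, embed, n_clusters=2)
print(labels)
```

Swapping the stand-ins for real models changes only the three callables; the clustering step is unchanged, which is what makes the paradigm easy to compare against a plain TF-IDF baseline.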
Problem

Research questions and friction points this paper is trying to address.

- Natural Language Processing
- Resource-Limited Languages
- Data Sparsity

Innovation

Methods, ideas, or system contributions that make the work stand out.

- Limited-Resource Languages
- Authorship Attribution Bias Detection
- Legal Document Processing Integration