Learning Transfers over Several Programming Languages

📅 2023-10-25
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit degraded performance on low-resource programming languages (e.g., COBOL, Rust, Swift) due to insufficient training data. Method: This paper systematically investigates cross-lingual transfer learning, introducing the first large-scale empirical framework for characterizing transfer patterns across programming languages—evaluated across 11–41 languages and 1,808 task-language combinations on code completion, translation, and repair. Contribution/Results: We empirically identify Kotlin and JavaScript as optimal source languages; uncover task-specific, heterogeneous dependencies on source-language features—challenging natural-language transfer paradigms; and develop both a principled source-language selection guide and a feature-based prediction model. Our approach significantly improves performance on low-resource languages across diverse coding tasks, establishing a scalable methodology for legacy system modernization and AI support for emerging programming languages.
📝 Abstract
Large language models (LLMs) have become remarkably good at improving developer productivity for high-resource programming languages. These models use two kinds of data: large amounts of unlabeled code samples for pre-training and relatively smaller amounts of labeled code samples for fine-tuning or in-context learning. Unfortunately, many programming languages are low-resource, lacking labeled samples for most tasks and often even lacking unlabeled samples. Therefore, users of low-resource languages (e.g., legacy or new languages) miss out on the benefits of LLMs. Cross-lingual transfer uses data from a source language to improve model performance on a target language. It has been well-studied for natural languages, but has received little attention for programming languages. This paper reports extensive experiments on four tasks using a transformer-based LLM and 11 to 41 programming languages to explore the following questions. First, how well does cross-lingual transfer work for a given task across different language pairs? Second, given a task and target language, how should one choose a source language? Third, which characteristics of a language pair are predictive of transfer performance, and how does that depend on the given task? Our empirical study with 1,808 experiments reveals practical and scientific insights, such as Kotlin and JavaScript being the most transferable source languages and different tasks relying on substantially different features. Overall, we find that learning transfers well across several programming languages.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM performance for low-resource programming languages
Investigating transfer learning across 11 to 41 programming languages
Predicting optimal source languages for cross-lingual transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transfer learning boosts low-resource language performance
Predicts best source languages using linguistic features
Cross-lingual transfer outperforms zero-shot learning
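The source-language selection idea above can be loosely illustrated with a toy sketch. The feature vectors, feature choices, and cosine-similarity ranking below are illustrative assumptions for exposition; they are not the paper's actual features or prediction model.

```python
# Hypothetical sketch: rank candidate source languages for cross-lingual
# transfer by similarity of simple language feature vectors.
# All feature values are made up for illustration, not the paper's data.

import math

# Toy feature vectors: (static typing, C-like syntax, data abundance)
FEATURES = {
    "JavaScript": (0.0, 1.0, 1.0),
    "Kotlin":     (1.0, 1.0, 0.6),
    "Python":     (0.0, 0.5, 1.0),
    "COBOL":      (1.0, 0.0, 0.1),
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_sources(target, candidates):
    """Return candidate source languages, most similar to the target first."""
    t = FEATURES[target]
    return sorted(candidates, key=lambda c: cosine(FEATURES[c], t), reverse=True)

# For a low-resource target like COBOL, pick the closest high-resource source.
print(rank_sources("COBOL", ["JavaScript", "Kotlin", "Python"]))
```

In the paper, the predictive features and their task-dependent weights are learned empirically rather than hand-assigned as here.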
Razan Baltaji
University of Illinois
Saurabh Pujar
IBM Research
Louis Mandel
IBM Research
Martin Hirzel
IBM Research
Luca Buratti
IBM Research
L. Varshney
University of Illinois