🤖 AI Summary
Existing code translation research primarily focuses on program-alignment (PA) data and neglects finer-grained snippet-alignment (SA) data, which limits models' capacity to capture local semantic and structural mappings. This work proposes, for the first time, a large language model (LLM)–based approach to automatically generate high-quality SA parallel data. It further introduces a simple two-stage fine-tuning strategy that leverages both kinds of data: Stage I fine-tunes the model on program-level data, and Stage II refines it on snippet-level aligned data. This combination improves the model's ability to preserve semantic consistency and structural fidelity across cross-lingual code snippets. Evaluated on the TransCoder-test benchmark, the method yields consistent improvements over fine-tuning on PA data alone, achieving a maximum gain of 3.78% on pass@k and demonstrating the effectiveness of the approach.
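To make the data-generation idea concrete, here is a minimal sketch (not the paper's exact pipeline) of how an LLM could be prompted to split one PA program pair into SA snippet pairs. The prompt wording, the JSON output format, and the `call_llm` wrapper are illustrative assumptions.

```python
# Hedged sketch: turning one program-aligned (PA) pair into snippet-aligned (SA)
# pairs by prompting an LLM. The prompt, output schema, and `call_llm` wrapper
# are assumptions for illustration, not the paper's exact pipeline.
import json

ALIGN_PROMPT = """You are given a {src_lang} program and its {tgt_lang} translation.
Split both programs into short, semantically equivalent snippets and return a JSON
list of objects, each with the keys "{src_lang}" and "{tgt_lang}".

{src_lang} program:
{src_code}

{tgt_lang} program:
{tgt_code}
"""


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call; assumed to return the model's raw text."""
    raise NotImplementedError


def generate_sa_pairs(src_code: str, tgt_code: str,
                      src_lang: str = "C++", tgt_lang: str = "Python") -> list[dict]:
    """Produce SA snippet pairs from a single PA program pair via LLM alignment."""
    prompt = ALIGN_PROMPT.format(src_lang=src_lang, tgt_lang=tgt_lang,
                                 src_code=src_code, tgt_code=tgt_code)
    raw = call_llm(prompt)          # assumed to return well-formed JSON text
    pairs = json.loads(raw)
    # Keep only well-formed, non-empty snippet pairs.
    return [p for p in pairs
            if p.get(src_lang, "").strip() and p.get(tgt_lang, "").strip()]
```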
📝 Abstract
Code translation aims to translate code from a source language to a target language and is used in various software development scenarios. Recent developments in Large Language Models (LLMs) have showcased their capabilities in code translation, and parallel corpora play a crucial role in training models for code translation. Parallel corpora can be categorized into program-alignment (PA) and snippet-alignment (SA) data. Although PA data has complete context and is suitable for semantic alignment learning, it may not provide adequate fine-grained training signals due to its extended length, whereas the brevity of SA data enables more fine-grained alignment learning. Due to limited parallel corpora, researchers have explored several augmentation methods for code translation. Previous studies mainly focus on augmenting PA data. In this paper, we propose a data augmentation method that leverages LLMs to generate SA data automatically. To fully leverage both PA data and SA data, we explore a simple yet effective two-stage training strategy, which consistently enhances model performance compared to fine-tuning solely on PA data. Experiments on TransCoder-test demonstrate that our augmented SA data combined with the two-stage training approach yields consistent improvements over the baseline, achieving a maximum gain of 3.78% on pass@k.
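For illustration only, the sketch below shows one way the two-stage schedule could be wired up with the Hugging Face `Trainer`: Stage I fine-tunes on PA pairs, Stage II continues from that checkpoint on SA pairs. The base model name, hyperparameters, and dataset preparation are assumptions, not the paper's reported setup.

```python
# Hedged sketch of the two-stage fine-tuning schedule: Stage I on program-aligned
# (PA) data, Stage II continuing from the same weights on snippet-aligned (SA) data.
# Model name, hyperparameters, and dataset details are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments


def two_stage_finetune(pa_dataset, sa_dataset, base_model="your-base-code-llm"):
    # `pa_dataset` / `sa_dataset` are assumed to be tokenized datasets of
    # translation examples (input_ids, attention_mask, labels).
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Stage I: fine-tune on program-level (PA) pairs for global semantic alignment.
    stage1_args = TrainingArguments(output_dir="stage1_pa", num_train_epochs=2,
                                    per_device_train_batch_size=8, learning_rate=2e-5)
    Trainer(model=model, args=stage1_args, train_dataset=pa_dataset).train()

    # Stage II: continue from the Stage-I weights on snippet-level (SA) pairs
    # to sharpen fine-grained, local alignment.
    stage2_args = TrainingArguments(output_dir="stage2_sa", num_train_epochs=1,
                                    per_device_train_batch_size=8, learning_rate=1e-5)
    Trainer(model=model, args=stage2_args, train_dataset=sa_dataset).train()

    return model, tokenizer
```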