Automated Snippet-Alignment Data Augmentation for Code Translation

📅 2025-10-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing code translation research primarily focuses on program-level alignment (PA), neglecting finer-grained snippet-level alignment (SA), thereby limiting models’ capacity to capture local semantic and structural mappings. This work proposes, for the first time, a large language model–based approach to automatically generate high-quality SA parallel data. We further design a two-stage fine-tuning framework that jointly leverages PA and SA: Stage I performs pretraining on program-level data, while Stage II refines the model using snippet-level aligned data. This integration significantly improves the model’s ability to preserve semantic consistency and structural fidelity across cross-lingual code snippets. Evaluated on the TransCoder-test benchmark, our method achieves up to a 3.78% absolute improvement in pass@k, demonstrating its effectiveness, generalizability, and robustness.

📝 Abstract
Code translation aims to translate the code from its source language to the target language and is used in various software development scenarios. Recent developments in Large Language Models (LLMs) have showcased their capabilities in code translation, and parallel corpora play a crucial role in training models for code translation. Parallel corpora can be categorized into program-alignment (PA) and snippet-alignment (SA) data. Although PA data has complete context and is suitable for semantic alignment learning, it may not provide adequate fine-grained training signals due to its extended length, while the brevity of SA data enables more fine-grained alignment learning. Due to limited parallel corpora, researchers explore several augmentation methods for code translation. Previous studies mainly focus on augmenting PA data. In this paper, we propose a data augmentation method that leverages LLMs to generate SA data automatically. To fully leverage both PA data and SA data, we explore a simple yet effective two-stage training strategy, which consistently enhances model performance compared to fine-tuning solely on PA data. Experiments on TransCoder-test demonstrate that our augmented SA data combined with the two-stage training approach yields consistent improvements over the baseline, achieving a maximum gain of 3.78% on pass@k.
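The abstract's core data distinction can be illustrated with a small sketch. The paper uses an LLM to segment and align snippets; the `derive_sa_pairs` helper below is a hypothetical, naive stand-in (splitting on blank lines and pairing positionally) that only shows the shape of the data: a single program-alignment (PA) pair yields several shorter snippet-alignment (SA) pairs.

```python
# Hypothetical illustration: deriving snippet-alignment (SA) pairs from one
# program-alignment (PA) pair. The paper's actual method uses an LLM for
# segmentation and alignment; blank-line splitting stands in for that step.
def derive_sa_pairs(src_program: str, tgt_program: str) -> list[tuple[str, str]]:
    src_snippets = [s.strip() for s in src_program.split("\n\n") if s.strip()]
    tgt_snippets = [s.strip() for s in tgt_program.split("\n\n") if s.strip()]
    # Keep positions where both sides have a snippet (naive 1:1 alignment).
    return list(zip(src_snippets, tgt_snippets))

cpp_code = (
    "int add(int a, int b) {\n    return a + b;\n}\n\n"
    "int sub(int a, int b) {\n    return a - b;\n}"
)
java_code = (
    "static int add(int a, int b) {\n    return a + b;\n}\n\n"
    "static int sub(int a, int b) {\n    return a - b;\n}"
)
sa_pairs = derive_sa_pairs(cpp_code, java_code)  # two short aligned snippets
```

Each SA pair is far shorter than the full PA pair, which is what gives the model the fine-grained alignment signal the abstract describes.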
Problem

Research questions and friction points this paper is trying to address.

Parallel corpora for code translation are scarce, constraining model training
Prior augmentation methods target program-alignment (PA) data only, leaving snippet-alignment (SA) data unexplored
The extended length of PA data dilutes the fine-grained training signals needed for local semantic and structural mappings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically generates snippet-alignment data using LLMs
Implements two-stage training with PA and SA data
Enhances code translation model performance consistently
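The two-stage strategy listed above can be sketched as a simple training schedule: stage 1 fine-tunes on PA pairs, stage 2 continues on the augmented SA pairs. `fine_tune` below is a hypothetical stub standing in for a real training loop; only the ordering of the stages reflects the paper.

```python
# Hypothetical sketch of the two-stage training strategy. `fine_tune` is a
# stub that records what the model was trained on, in place of a real loop.
def fine_tune(model_state: list, data: list, stage: str) -> list:
    return model_state + [(stage, len(data))]

pa_data = [("c++ program", "java program")] * 4     # coarse, full-context pairs
sa_data = [("c++ snippet", "java snippet")] * 16    # fine-grained local pairs

state = []
state = fine_tune(state, pa_data, "stage1-PA")  # learn semantic alignment first
state = fine_tune(state, sa_data, "stage2-SA")  # then refine local mappings
```

The design choice mirrors the abstract: PA data supplies complete context for semantic alignment, and the subsequent SA pass adds the fine-grained signal that fine-tuning on PA data alone lacks.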
Zhiming Zhang
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, Harbin, China
Qingfu Zhu
Harbin Institute of Technology
NLP, Code LLM
Xianzhen Luo
Harbin Institute of Technology
Code Intelligence, Inference Acceleration
Yixuan Wang
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, Harbin, China
Bohan Li
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, Harbin, China
Wanxiang Che
Professor, Harbin Institute of Technology
Natural Language Processing