🤖 AI Summary
Addressing the trade-off between accuracy and inference latency in automatic code translation, this paper proposes a two-stage training framework. First, DeepSeek-R1 is prompted to generate structured reasoning traces, enabling construction of a high-quality (source code, reasoning trace, target code) triplet dataset validated for syntactic correctness and functional equivalence. Second, the model is optimized via supervised fine-tuning followed by reinforcement learning. The core innovation lies in explicitly modeling and leveraging intermediate reasoning paths, enhancing translation quality while substantially reducing computational overhead. Experiments across six cross-language translation pairs demonstrate improvements of up to 49.2% in Code Accuracy (CA) and 27.8% in CodeBLEU, alongside reductions of up to 19.3% in generated tokens and 29.0% in end-to-end inference latency. These gains make the approach well suited to human-in-the-loop development workflows.
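To make the dataset-construction step concrete, here is a minimal Python sketch of the validation loop described above. It assumes a Python translation target, a hypothetical `query_deepseek_r1` client wrapper, and a per-sample unit-test suite; these names are illustrative assumptions, not identifiers from the paper's released artifacts.

```python
import ast
import subprocess
import tempfile
from dataclasses import dataclass

@dataclass
class Triplet:
    source_code: str
    reasoning: str
    target_code: str

def query_deepseek_r1(source_code: str) -> tuple[str, str]:
    """Hypothetical client wrapper: returns (reasoning_trace, target_code)."""
    raise NotImplementedError("replace with a real DeepSeek-R1 API call")

def syntactically_valid(code: str) -> bool:
    """Automated syntax check, assuming a Python translation target."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def functionally_equivalent(code: str, test_suite: str) -> bool:
    """Run the candidate translation against its unit tests in a subprocess."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + test_suite)
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True, timeout=30)
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

def build_triplet(source_code: str, test_suite: str) -> Triplet | None:
    """Keep a (source, reasoning, target) triplet only if both checks pass."""
    reasoning, target = query_deepseek_r1(source_code)
    if syntactically_valid(target) and functionally_equivalent(target, test_suite):
        return Triplet(source_code, reasoning, target)
    return None  # discard unreliable samples instead of repairing them
```

Filtering rather than repairing failed generations keeps the training signal clean: only triplets that pass both automated checks enter the fine-tuning set.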
📄 Abstract
Code translation is a crucial task in software development and maintenance. While recent advancements in large language models (LLMs) have improved automated code translation accuracy, these gains often come at the cost of increased inference latency, hindering real-world development workflows that involve human-in-the-loop inspection. To address this trade-off, we propose EffiReasonTrans, a training framework designed to improve translation accuracy while balancing inference latency. We first construct a high-quality reasoning-augmented dataset by prompting a stronger language model, DeepSeek-R1, to generate intermediate reasoning and target translations. Each (source code, reasoning, target code) triplet undergoes automated syntax and functionality checks to ensure reliability. Based on this dataset, we employ a two-stage training strategy: supervised fine-tuning on reasoning-augmented samples, followed by reinforcement learning to further enhance accuracy and balance inference latency. We evaluate EffiReasonTrans on six translation pairs. Experimental results show that it consistently improves translation accuracy (up to +49.2% CA and +27.8% CodeBLEU compared to the base model) while reducing the number of generated tokens (up to -19.3%) and lowering inference latency in most cases (up to -29.0%). Ablation studies further confirm the complementary benefits of the two-stage training framework. Additionally, EffiReasonTrans demonstrates improved translation accuracy when integrated into agent-based frameworks. Our code and data are available at https://github.com/DeepSoftwareAnalytics/EffiReasonTrans.
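For the reinforcement-learning stage, the abstract implies a reward that jointly credits translation accuracy and discourages long outputs. The sketch below shows one plausible reward shaping under that reading; the additive form and the `length_weight` value are illustrative assumptions, not the paper's exact objective.

```python
def translation_reward(passed_tests: bool,
                       num_generated_tokens: int,
                       length_weight: float = 1e-3) -> float:
    """One plausible RL reward: full credit for a functionally correct
    translation, minus a small per-token penalty so the policy learns to
    keep reasoning traces short (illustrative, not the paper's formula)."""
    correctness = 1.0 if passed_tests else 0.0
    return correctness - length_weight * num_generated_tokens

# Example: a correct 400-token output outscores a correct 900-token one,
# steering the policy toward shorter reasoning at equal accuracy.
assert translation_reward(True, 400) > translation_reward(True, 900)
```

A reward of this shape would explain the reported outcome pattern: accuracy gains from the correctness term, with fewer generated tokens and lower end-to-end latency from the length penalty.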