ACT: Bridging the Gap in Code Translation through Synthetic Data Generation & Adaptive Training

📅 2025-07-22
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Code translation faces two key challenges: rule-based approaches lack generalizability, while reliance on proprietary large language models (LLMs) introduces data privacy risks and vendor lock-in. This paper proposes an end-to-end optimization framework for open-source LLMs. The method addresses these issues through three core contributions: (1) automated generation of high-quality cross-lingual parallel code corpora, validated by execution-level unit tests to ensure functional equivalence; (2) a learnable controller module that dynamically orchestrates data synthesis, fine-tuning, and evaluation in a closed-loop, adaptive training pipeline; and (3) integration of execution-driven evaluation with iterative hyperparameter tuning. Evaluated on multilingual benchmarks and real-world industrial migration tasks, the approach substantially narrows the performance gap between open-source and proprietary LLMs, improving translation accuracy and reliability. The framework establishes a new paradigm for secure, controllable, and high-performance enterprise-grade code migration.
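The closed-loop pipeline described above can be sketched in a few lines. This is a minimal illustration under my own assumptions, not the paper's implementation: `synthesize`, `finetune`, and `evaluate` are hypothetical callbacks, and the stopping thresholds are placeholder values. The controller decides each round whether to generate more targeted data and keep training, or stop when the target is reached or improvement stalls.

```python
# Hypothetical sketch of a closed-loop controller in the spirit of ACT.
# All names and thresholds are illustrative, not taken from the paper.

def controller_loop(synthesize, finetune, evaluate,
                    target_acc=0.9, max_rounds=5, min_gain=0.01):
    """Run synthesize -> finetune -> evaluate rounds; return accuracy history."""
    history = []
    dataset = []
    for _ in range(max_rounds):
        dataset += synthesize(history)   # generate targeted training data
        finetune(dataset)                # update the open-source model
        acc = evaluate()                 # execution-level pass rate
        history.append(acc)
        if acc >= target_acc:
            break                        # target reached: stop
        if len(history) > 1 and acc - history[-2] < min_gain:
            break                        # improvement stalled: stop early
    return history
```

In a real system the evaluation feedback would also steer which language pairs or failure modes the next synthesis round targets; the sketch only models the stop/continue decision.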

๐Ÿ“ Abstract
Code translation is a crucial process in software development and migration projects, enabling interoperability between different programming languages and enhancing software adaptability and thus longevity. Traditional automated translation methods rely heavily on handcrafted transformation rules, which often lack flexibility and scalability. Meanwhile, advanced language models present promising alternatives but are often limited to proprietary, API-based implementations that raise concerns over data security and vendor reliance. In this paper, we present Auto-Train for Code Translation (ACT), an innovative framework that aims to improve code translation capabilities by enabling in-house finetuning of open-source Large Language Models (LLMs). ACT's automated pipeline significantly boosts the performance of these models, narrowing the gap between open-source accessibility and the high performance of closed-source solutions. Central to ACT is its synthetic data generation module, which builds extensive, high-quality datasets from initial code samples, incorporating unit tests to ensure functional accuracy and diversity. ACT's evaluation framework incorporates execution-level checks, offering a comprehensive assessment of translation quality. A key feature of ACT is its controller module, which manages the entire pipeline by dynamically adjusting hyperparameters, orchestrating iterative data generation, and finetuning based on real-time evaluations. This enables ACT to intelligently decide when to continue training, generate additional targeted training data, or stop the process. Our results demonstrate that ACT consistently enhances the effectiveness of open-source models, offering businesses and developers a secure and reliable alternative. Additionally, applying our data generation pipeline to industry-scale migration projects has led to notable developer acceleration.
Problem

Research questions and friction points this paper is trying to address.

Improving code translation using synthetic data generation
Enhancing open-source LLMs for secure in-house finetuning
Automating dynamic hyperparameter adjustment for optimal translation
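The first problem point, synthetic data generation, hinges on keeping only translations that demonstrably preserve behavior. A minimal sketch of that filtering step, with hypothetical `translate` and `passes_tests` callbacks standing in for the model and the test harness (neither is from the paper):

```python
# Illustrative filter for building a parallel corpus: a synthetic
# (source, translation) pair is kept only if the translated program
# passes the unit tests attached to the source sample.
# All function names here are assumptions, not ACT's actual API.

def build_parallel_corpus(samples, translate, passes_tests):
    """samples: iterable of (source_code, unit_tests); return validated pairs."""
    corpus = []
    for source_code, unit_tests in samples:
        candidate = translate(source_code)       # model-proposed translation
        if passes_tests(candidate, unit_tests):  # execution-level filter
            corpus.append((source_code, candidate))
    return corpus
```

Pairs that fail the tests are simply discarded, so the corpus quality is bounded by the test suites rather than by the raw model output.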
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic data generation enhances translation quality
Dynamic hyperparameter adjustment optimizes training process
Execution-level checks ensure comprehensive translation assessment
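The execution-level check named in the last bullet amounts to comparing the source and translated programs on concrete inputs. A hedged sketch of that idea (my illustration, not the paper's code; it treats any crash or output mismatch as failure, which simplifies away cases where both programs raise the same error):

```python
# Behavioral-equivalence check over a finite test suite: the translated
# function must reproduce the source function's output on every input.
# Names are illustrative assumptions, not taken from ACT.

def functionally_equivalent(src_fn, translated_fn, test_inputs):
    """Return True iff both functions agree on all argument tuples."""
    for args in test_inputs:
        try:
            if src_fn(*args) != translated_fn(*args):
                return False   # outputs diverge
        except Exception:
            return False       # either side crashed
    return True
```

In practice the two sides would run in separate sandboxed interpreters for different languages; the single-process version above only conveys the pass/fail criterion.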
🔎 Similar Papers
2024-03-25 · 2024 IEEE/ACM First International Conference on AI Foundation Models and Software Engineering (Forge) · Citations: 22