🤖 AI Summary
To address low translation accuracy and the difficulty of ensuring functional equivalence in automated translation from serial code (e.g., C) to parallel GPU code (e.g., CUDA), both stemming from scarce high-quality labeled data, this paper proposes the first bidirectional co-evolutionary mutual-supervision framework. It couples a Translator and a Tester that jointly verify semantic equivalence (Co-verify) and iteratively optimize each other (Co-evolve), forming a closed loop that generates high-fidelity training data under strict semantic constraints. The method integrates code translation, automatic unit-test generation and validation, and back-translation augmentation, and is built on fine-tuned Qwen2.5-Coder. Experiments show significant improvements: Pass@1 increases by up to 28.91%, Tester accuracy rises by 68.90%, and BLEU/CodeBLEU scores surpass CodeRosetta by 1.56 and 6.92 points, respectively, matching the performance of DeepSeek-R1 and GPT-4.1.
📝 Abstract
The rise of GPU-based high-performance computing (HPC) has driven the widespread adoption of parallel programming models such as CUDA. Yet the inherent complexity of parallel programming creates a demand for automated sequential-to-parallel translation approaches. However, data scarcity poses a significant challenge for machine-learning-based sequential-to-parallel code translation. Although recent back-translation methods show promise, they still fail to ensure functional equivalence of the translated code. In this paper, we propose Mutual-Supervised Learning (MuSL), a novel framework for sequential-to-parallel code translation that addresses the functional equivalence issue. MuSL consists of two models, a Translator and a Tester. Through an iterative loop of Co-verify and Co-evolve steps, the Translator and the Tester mutually generate data for each other and improve collectively: the Tester generates unit tests to verify and filter functionally equivalent translated code, thereby evolving the Translator, while the Translator generates translated code as augmented input to evolve the Tester. Experimental results demonstrate that MuSL significantly enhances the performance of the base model: when applied to Qwen2.5-Coder, it improves Pass@1 by up to 28.91% and boosts Tester performance by 68.90%, outperforms the previous state-of-the-art method CodeRosetta by 1.56 and 6.92 points in BLEU and CodeBLEU, and achieves performance comparable to DeepSeek-R1 and GPT-4.1. Our code is available at https://github.com/kcxain/musl.
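The Co-verify/Co-evolve loop described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: `translate`, `generate_unit_tests`, and `passes` are hypothetical stand-ins for the Translator model, the Tester model, and a sandboxed test runner.

```python
# Illustrative sketch of one MuSL round: Co-verify filters translations
# by generated unit tests; the surviving pairs become training data that
# Co-evolve would use to fine-tune both models. All three helpers below
# are toy stand-ins, NOT the paper's actual models or executor.

def translate(src: str) -> str:
    # Toy Translator: pretend to emit a CUDA version of a C snippet.
    return f"__global__ /* translated */ {src}"

def generate_unit_tests(src: str) -> list:
    # Toy Tester: emit one "unit test" per source snippet.
    return [f"assert_equiv({src!r})"]

def passes(translated: str, tests: list) -> bool:
    # Toy verifier: a real system would execute the tests in a sandbox.
    return bool(translated) and bool(tests)

def musl_round(corpus: list):
    """One Co-verify round over an unlabeled corpus.

    Returns (translator_data, tester_data): verified (src, tgt) pairs
    for fine-tuning the Translator, and (tgt, tests) pairs serving as
    augmented input for fine-tuning the Tester.
    """
    translator_data, tester_data = [], []
    for src in corpus:
        tgt = translate(src)               # Translator proposes parallel code
        tests = generate_unit_tests(src)   # Tester proposes unit tests
        if passes(tgt, tests):             # Co-verify: keep equivalent pairs only
            translator_data.append((src, tgt))
            tester_data.append((tgt, tests))
    return translator_data, tester_data    # Co-evolve: fine-tune both models

if __name__ == "__main__":
    tr_data, te_data = musl_round(["saxpy(x, y, a)"])
    print(len(tr_data), len(te_data))
```

The key design point the sketch captures is the mutual supervision: neither model needs labeled parallel code, because each round's surviving pairs are vouched for by the other model's tests.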