SpecTra: Enhancing the Code Translation Ability of Language Models by Generating Multi-Modal Specifications

📅 2024-05-28
🏛️ arXiv.org
📈 Citations: 17
Influential: 0
🤖 AI Summary
Most existing code translation methods feed only a program's source code to large language models (LLMs), which limits how well the model can recover the program's semantics. SpecTra is a multi-stage framework that addresses this gap: it applies a novel self-consistency filter to generate high-quality static specifications, test cases, and natural language (NL) descriptions from the source program, then supplies these specifications alongside the source code when prompting the model to translate. Evaluated on three translation tasks—C→Rust, C→Go, and JavaScript→TypeScript—SpecTra improves the performance of six popular LLMs by up to 10 percentage points in absolute accuracy (a 26% relative improvement).

📝 Abstract
Large language models (LLMs) are increasingly being used for the task of automated code translation, which has important real-world applications. However, most existing approaches use only the source code of a program as an input to an LLM, and do not consider the different kinds of specifications that can be extracted from a program. In this paper, we propose SpecTra, a multi-stage approach that uses a novel self-consistency filter to first generate high-quality static specifications, test cases, and natural language descriptions from a given program, and then uses these along with the source code to improve the quality of LLM-generated translations. We evaluate SpecTra on three code translation tasks - C to Rust, C to Go, and JavaScript to TypeScript - and show that it can enhance the performance of six popular LLMs on these tasks by up to 10 percentage points and a relative improvement of 26%. Our research suggests that generating high-quality specifications could be a promising and efficient way to improve the performance of LLMs for code translation. We make our code and data available, anonymized for review.
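The staged approach the abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the `generate` parameter stands in for any LLM completion function, and the prompt wording and the majority-vote consistency heuristic are assumptions for illustration (the paper's actual filter may use a different criterion).

```python
# Hypothetical sketch of SpecTra's multi-stage pipeline (illustrative only).
# `generate` is any callable that maps a prompt string to a completion string.

def spectra_translate(source: str, generate, n_candidates: int = 3) -> str:
    """Translate `source` using LLM-generated multi-modal specifications."""
    kinds = ("static specification", "test cases", "natural language description")
    specs = {}
    for kind in kinds:
        # Stage 1: sample several candidate specifications of each kind.
        candidates = [generate(f"Write a {kind} for:\n{source}")
                      for _ in range(n_candidates)]
        # Stage 2: self-consistency filter -- approximated here by keeping the
        # candidate the samples most often agree on (majority vote).
        specs[kind] = max(set(candidates), key=candidates.count)
    # Stage 3: prompt with the source code plus the filtered specifications.
    spec_block = "\n\n".join(f"## {k}\n{v}" for k, v in specs.items())
    return generate(
        f"Translate this C program to Rust.\n\n{spec_block}\n\n## Source\n{source}"
    )
```

A stub LLM is enough to exercise the control flow end to end before wiring in a real model client.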
Problem

Research questions and friction points this paper is trying to address.

Most existing approaches feed only a program's source code to the LLM
Specifications extractable from a program (static specs, test cases, NL descriptions) go unused
Source-only input gives the model an insufficient grasp of program semantics, degrading translation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates multi-modal specifications from source code
Uses self-consistency filter to ensure specification quality
Integrates specifications with source code to improve translation
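One concrete way a self-consistency filter could work for the test-case modality (an illustrative reading, not necessarily the paper's exact criterion) is to keep only the generated tests whose expected outputs match the original program's actual behavior:

```python
# Hypothetical filter: a generated (input, expected_output) test case survives
# only if running the original program reproduces the expected output, so every
# surviving test is consistent with the source program by construction.

def filter_tests(program, test_cases):
    """`program` is a callable wrapping the source implementation;
    `test_cases` is a list of (input, expected_output) pairs from the LLM."""
    kept = []
    for inp, expected in test_cases:
        try:
            if program(inp) == expected:  # spec agrees with actual behavior
                kept.append((inp, expected))
        except Exception:
            pass  # discard tests the original program cannot even run
    return kept
```

Tests that pass this check double as an executable oracle for the translated program: the translation can be scored by how many of the filtered tests it also satisfies.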