🤖 AI Summary
Existing large language models (LLMs) for code translation are evaluated for functional correctness while execution efficiency goes unmeasured. Method: We introduce TRACY, the first efficiency-aware multilingual code translation benchmark covering C++, Java, and Python, comprising 1,011 tasks, each with an average of 22.1 verified reference translations and 10 high-load stress tests. TRACY is built by a two-stage LLM-driven pipeline: (1) generating resource-sensitive stress tests, and (2) efficiency-guided task pruning; it integrates multilingual runtime validation and fine-grained resource profiling. Contribution/Results: Experiments reveal severe efficiency deficiencies across state-of-the-art models: algorithmic inefficiencies induce a median 5.6× runtime overhead, and several smaller models outperform larger ones in efficiency. TRACY establishes a rigorous foundation for evaluating and advancing efficiency-aware code translation.
📄 Abstract
Automatic code translation is a fundamental task in modern software development. While the advent of Large Language Models (LLMs) has significantly improved the correctness of code translation, the critical dimension of execution efficiency remains overlooked. To address this gap, we introduce TRACY, the first comprehensive benchmark designed to evaluate the execution efficiency of LLM-translated code. TRACY is constructed through an LLM-driven two-stage pipeline: an initial stage generates a suite of stress tests to amplify performance differences, followed by an efficiency-oriented task pruning stage that isolates the efficiency-distinguishing tasks. The resulting benchmark comprises 1,011 code translation tasks across C++, Java, and Python, each accompanied by an average of 22.1 verified reference translations and 10 computationally demanding tests. Our extensive evaluation of 26 representative LLMs reveals that even top-tier LLMs struggle to consistently produce efficient code translations. For instance, Claude-4-think, the leading model for correctness, ranks eighth overall when time efficiency is taken into account, surpassed by several smaller open-source models. We further pinpoint that algorithmic flaws and improper resource handling are the most detrimental, causing a median time slowdown of 5.6$\times$ and memory increase of 12.0$\times$, respectively. Our work underscores the necessity of jointly optimizing for correctness and efficiency in future LLM-based code translation.
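To make the "functionally correct but inefficient" distinction concrete, here is a minimal, self-contained sketch of the kind of measurement TRACY's evaluation implies: two functionally equivalent implementations (an O(n²) version standing in for an algorithmically flawed translation, an O(n) version for an efficient one) are run on a high-load input while wall-clock time and peak memory are recorded. The task, function names, and workload are hypothetical illustrations, not taken from the benchmark.

```python
import time
import tracemalloc
from collections import Counter

def naive(nums):
    # O(n^2): count index pairs i < j whose values sum to zero
    return sum(1 for i in range(len(nums))
                 for j in range(i + 1, len(nums))
                 if nums[i] + nums[j] == 0)

def efficient(nums):
    # O(n): same count via a hash map of value frequencies
    counts = Counter(nums)
    pairs = sum(counts[x] * counts[-x] for x in counts if x > 0)
    pairs += counts[0] * (counts[0] - 1) // 2  # pairs of zeros
    return pairs

def profile(fn, workload):
    # Measure result, elapsed wall-clock time, and peak traced memory
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(workload)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# A stress input large enough to separate the two complexity classes
stress = list(range(-1000, 1000))
r1, t1, m1 = profile(naive, stress)
r2, t2, m2 = profile(efficient, stress)
assert r1 == r2          # both translations are "correct" ...
print(f"slowdown of naive version: {t1 / t2:.1f}x")  # ... but not equally efficient
```

A correctness-only harness would score both functions identically; only under a stress test with resource profiling does the gap become visible, which is the rationale for TRACY's stress-test generation stage.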