Isolating Language-Coding from Problem-Solving: Benchmarking LLMs with PseudoEval

📅 2025-02-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing code generation benchmarks (e.g., HumanEval, MBPP) conflate problem-solving ability with language-specific coding proficiency, hindering fine-grained diagnosis of LLM limitations. To address this, we propose PseudoEval, the first pseudocode-driven, multilingual, decoupled evaluation benchmark. By standardizing pseudocode as the sole input, PseudoEval removes natural-language understanding as a confounder, enabling orthogonal assessment of problem-solving and language-coding capabilities. Methodologically, we construct a controllable, multilingual evaluation pipeline covering Python, Rust, and other languages, and publicly release both the benchmark and its generation toolchain. Key findings: (1) problem-solving ability transfers strongly across programming languages, whereas language-coding ability is highly language-specific; (2) Python performance bottlenecks stem primarily from problem-solving limitations, while Rust bottlenecks arise predominantly from language-coding constraints. This work establishes a new paradigm for granular model diagnostics and targeted capability optimization.

๐Ÿ“ Abstract
Existing code generation benchmarks for Large Language Models (LLMs) such as HumanEval and MBPP are designed to study LLMs' end-to-end performance, where the benchmarks feed a problem description in natural language as input and examine the generated code in specific programming languages. However, the evaluation scores revealed in this way provide little hint as to the bottleneck of code generation -- whether LLMs are struggling with their problem-solving capability or their language-coding capability. To answer this question, we construct PseudoEval, a multilingual code generation benchmark that provides a solution written in pseudocode as input. By doing so, the bottleneck of code generation in various programming languages can be isolated and identified. Our study yields several interesting findings. For example, we identify that the bottleneck of LLMs in Python programming is problem-solving, while in Rust they struggle relatively more with language-coding. Also, our study indicates that problem-solving capability may transfer across programming languages, while language-coding needs more language-specific effort, especially for undertrained programming languages. Finally, we release the pipeline for constructing PseudoEval to facilitate extension to existing benchmarks. PseudoEval is available at: https://anonymous.4open.science/r/PseudocodeACL25-7B74.
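To make the evaluation setup concrete, here is a hypothetical sketch of the kind of input/output pair a pseudocode-driven benchmark might use. Both the pseudocode style and the `two_sum` task are invented for illustration and are not taken from PseudoEval; the point is that the model receives an algorithmic solution, not a natural-language problem statement, so only its language-coding ability is exercised.

```python
# Hypothetical pseudocode input (style invented for illustration;
# PseudoEval's actual pseudocode format may differ):
#
#   function two_sum(nums, target):
#       let seen be an empty map from value to index
#       for each index i and value v in nums:
#           if target - v is in seen:
#               return (seen[target - v], i)
#           record v -> i in seen
#       return nothing
#
# A model under evaluation would then translate the pseudocode into a
# concrete target language, e.g. Python:

def two_sum(nums, target):
    """Return indices of two numbers in `nums` that sum to `target`."""
    seen = {}  # maps value -> index, mirroring the pseudocode's map
    for i, v in enumerate(nums):
        if target - v in seen:
            return (seen[target - v], i)
        seen[v] = i
    return None
```

Under this setup, a failure on such an item signals a language-coding gap in the target language, since the algorithmic problem-solving is already supplied by the pseudocode.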
Problem

Research questions and friction points this paper is trying to address.

Isolating language-coding from problem-solving in LLMs
Identifying bottlenecks in multilingual code generation
Developing PseudoEval benchmark for code generation analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses pseudocode for benchmark evaluation
Isolates problem-solving from language-coding
Facilitates multilingual code generation analysis
Jiarong Wu
Courant Institute of Mathematical Sciences, NYU
fluid dynamics · ocean waves · air-sea interaction · computational fluid dynamics
Songqiang Chen
The Hong Kong University of Science and Technology
Jialun Cao
The Hong Kong University of Science and Technology
SE for AI · AI for SE
Hau Ching Lo
The Hong Kong University of Science and Technology
S. Cheung
The Hong Kong University of Science and Technology