How Well Do LLMs Generate Code for Different Application Domains? Benchmark and Evaluation

📅 2024-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code generation benchmarks primarily target general-purpose scenarios and lack fine-grained evaluation for specialized software domains and multilingual adaptability. Method: the paper proposes MultiCodeBench, the first domain-aware code generation benchmark, covering 12 application domains and 15 programming languages and comprising 2,400 tasks that were manually rewritten and validated via static dependency analysis. It employs a hierarchical construction paradigm (organized by domain and technical framework), static-analysis-driven dependency attribution, and a GitHub sampling and docstring rewriting pipeline designed to prevent data leakage. Contribution/Results: a systematic evaluation of 11 state-of-the-art code LLMs reveals substantial cross-domain performance disparities (e.g., stronger performance in web development than in embedded systems) and identifies critical failure modes, including inadequate framework adaptation and API misinterpretation, providing an empirical foundation for both model improvement and practitioner tool selection.

📝 Abstract
Recently, an increasing number of AI-driven programming assistants powered by code LLMs have been integrated into various real-world software development environments, significantly boosting developer productivity. However, existing code generation benchmarks primarily focus on general-purpose scenarios, leaving the code generation performance of LLMs for specific application domains largely unknown. In this paper, we introduce a new benchmark, MultiCodeBench, to fill this gap. MultiCodeBench comprises 2,400 programming tasks, covering 12 popular software development domains and 15 programming languages. Specifically, we perform in-depth research to identify these 12 application domains. Given that each domain may involve multiple technical frameworks, and that different frameworks present distinct challenges in the coding process, we categorize the commonly used frameworks and platforms within each domain. We then sample programming problems from GitHub repositories related to these subdomains. To ensure the quality of the tasks and mitigate data leakage issues, we invite annotators to rewrite the docstrings for each task in MultiCodeBench. Additionally, we build a static analysis-based dependency parsing tool to extract the dependencies in the ground truth for each task, enabling deeper performance analysis. Through extensive experiments on MultiCodeBench with eleven representative mainstream LLMs, we reveal the code generation performance of the LLMs across different application domains, providing practical insights for developers in downstream fields when selecting LLMs. Furthermore, we analyze the reasons behind the models' failures in completing software application development tasks, offering guidance for model developers to enhance domain-specific code generation capabilities.
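The abstract mentions a static-analysis-based tool that extracts the dependencies used by each task's ground-truth code. The paper does not publish that tool here, but for Python tasks the core idea can be sketched with the standard-library `ast` module; the function name `extract_dependencies` and the example snippet below are illustrative assumptions, not the authors' implementation.

```python
import ast

def extract_dependencies(source: str) -> set[str]:
    """Collect imported module names and called function/attribute names
    from a Python snippet. A rough stand-in for the kind of dependency
    parsing the benchmark describes (the real tool is not shown here)."""
    tree = ast.parse(source)
    deps: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # e.g. "import numpy as np" -> records "numpy"
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            # e.g. "from flask import Flask" -> records "flask"
            deps.add(node.module)
        elif isinstance(node, ast.Call):
            # record the called name, whether bare or attribute access
            func = node.func
            if isinstance(func, ast.Attribute):
                deps.add(func.attr)
            elif isinstance(func, ast.Name):
                deps.add(func.id)
    return deps

# Hypothetical ground-truth snippet used only to demonstrate the parser.
snippet = '''
import numpy as np
from flask import Flask

app = Flask(__name__)
arr = np.zeros(3)
'''
print(sorted(extract_dependencies(snippet)))
```

A real multi-language version (the benchmark spans 15 languages) would need per-language parsers, e.g. via tree-sitter, but the extraction logic per task follows the same pattern: parse, walk, collect import and call sites.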
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Specialized Software Development
Code Generation Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

MultiCodeBench
Large Language Models
Code Generation