🤖 AI Summary
Current benchmarks for evaluating code-generating agents suffer from narrow task scopes, limiting their ability to assess comprehensive capabilities in realistic software engineering contexts. This work introduces a multilingual, multitask evaluation benchmark spanning Python, Java, and C++, encompassing 1,794 high-quality samples across four task categories: bug fixing, test generation, code review fixing, and style fixing. Task diversity, data quality, and absence of data leakage are ensured through rigorous human validation and synthetic data generation strategies. Empirical evaluation reveals that while state-of-the-art agents achieve moderate success on Python bug-fixing tasks, their performance degrades substantially on test generation and on Java and C++ tasks—SWE-Agent, for example, attains a pass rate of at most 20.9% on Java test generation—highlighting both the limitations of current approaches and the necessity of this benchmark.
📝 Abstract
LLM-powered coding agents are redefining how real-world software is developed. To drive research toward better coding agents, we require challenging benchmarks that can rigorously evaluate the ability of such agents to perform various software engineering tasks. However, popular coding benchmarks such as HumanEval and SWE-Bench focus on narrowly scoped tasks such as competition programming and patch generation. In practice, software engineers must handle a much broader set of tasks to develop real-world software. To address this gap, we propose OmniCode, a novel software engineering benchmark that covers a broader and more diverse set of task categories beyond code or patch generation. Overall, OmniCode contains 1,794 tasks spanning three programming languages (Python, Java, and C++) and four key categories: bug fixing, test generation, code review fixing, and style fixing. In contrast to prior software engineering benchmarks, the tasks in OmniCode are (1) manually validated to eliminate ill-defined problems, and (2) synthetically crafted or recently curated to avoid data leakage; to this end, we present a new framework for synthetically generating diverse software tasks from limited real-world data. We evaluate popular agent frameworks such as SWE-Agent on OmniCode and show that while they may perform well on Python bug fixing, they fall short on tasks such as test generation and in languages such as C++ and Java. For instance, SWE-Agent achieves a pass rate of at most 20.9% with DeepSeek-V3.1 on Java test generation tasks. OmniCode aims to serve as a robust benchmark and spur the development of agents that can perform well across different aspects of software development. Code and data are available at https://github.com/seal-research/OmniCode.