🤖 AI Summary
This work addresses a key limitation of large language models (LLMs) in domain-specific software development: they struggle to leverage external domain knowledge, and the field lacks dedicated benchmarks for evaluating how well they do so. To bridge this gap, we introduce the first benchmark that explicitly requires models to use structured external knowledge corpora, spanning six emerging domains, eleven frameworks, and twenty-five real-world projects. The benchmark features multi-granularity tasks, from function-level to project-level code generation (verified with rigorous test suites) and knowledge comprehension (assessed via multiple-choice question answering), enabling comprehensive evaluation of domain adaptation techniques such as supervised fine-tuning (SFT), retrieval-augmented generation (RAG), and kNN-based language models (kNN-LM). Experimental results reveal significant performance deficiencies across current state-of-the-art models; the best performer, Claude Code, achieves only 34.2% accuracy, highlighting substantial gaps in how existing approaches acquire and apply domain-specific knowledge.
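As a rough illustration of the retrieval-augmented setup mentioned above, the sketch below shows one minimal way RAG over a knowledge corpus could be wired up. The token-overlap retriever, the corpus format, and the prompt packaging are illustrative assumptions, not KOCO-BENCH's actual interface.

```python
# Minimal RAG sketch, assuming the knowledge corpus is a list of text
# snippets (API docs, rules, constraints). All names are illustrative.
from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank corpus snippets by simple token overlap with the query."""
    q = tokenize(query)
    scored = sorted(
        corpus,
        key=lambda doc: sum((tokenize(doc) & q).values()),
        reverse=True,
    )
    return scored[:k]

def rag_prompt(task: str, corpus: list[str]) -> str:
    """Prepend retrieved domain knowledge to the coding task."""
    context = "\n\n".join(retrieve(task, corpus))
    return f"Domain knowledge:\n{context}\n\nTask:\n{task}"

# Usage: pass rag_prompt(task, corpus) to any code LLM in place of the
# raw task; a real retriever would use dense embeddings instead.
```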
📝 Abstract
Large language models (LLMs) excel at general programming but struggle with domain-specific software development, which calls for domain specialization methods that let LLMs learn and utilize domain knowledge and data. However, existing domain-specific code benchmarks cannot evaluate the effectiveness of such methods: they assess what knowledge LLMs already possess rather than how LLMs acquire and apply new knowledge, and they provide no explicit knowledge corpora on which specialization methods can be developed. To this end, we present KOCO-BENCH, a novel benchmark designed to evaluate domain specialization methods in real-world software development. KOCO-BENCH covers 6 emerging domains with 11 software frameworks and 25 projects, featuring curated knowledge corpora alongside multi-granularity evaluation tasks: domain code generation (from function level to project level, with rigorous test suites) and domain knowledge understanding (via multiple-choice Q&A). Unlike previous benchmarks that only provide test sets for direct evaluation, KOCO-BENCH requires acquiring and applying diverse domain knowledge (APIs, rules, constraints, etc.) from the knowledge corpora to solve its tasks. Our evaluations show that KOCO-BENCH poses significant challenges to state-of-the-art LLMs: even with domain specialization methods (e.g., SFT, RAG, kNN-LM) applied, improvements remain marginal, and the best-performing coding agent, Claude Code, achieves only 34.2% accuracy, highlighting the urgent need for more effective domain specialization methods. We release KOCO-BENCH, evaluation code, and baselines to support further research at https://github.com/jiangxxxue/KOCO-bench.
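For readers unfamiliar with kNN-LM, one of the baselines named above, the hedged sketch below shows the standard interpolation idea: the next-token distribution is a mixture of the base LM's softmax and a distribution induced by nearest neighbors in a datastore of (hidden state, next token) pairs, here assumed to be built from the domain corpus. The shapes and hyperparameters are placeholders for illustration, not the paper's configuration.

```python
# kNN-LM interpolation sketch. Assumed inputs: `keys` is an (N, d)
# array of context embeddings, `values` an (N,) array of the token ids
# that followed each context, `query` a (d,) embedding of the current
# context, and `lm_probs` the base LM's next-token distribution.
import numpy as np

def knn_probs(query, keys, values, vocab_size, k=8, temp=1.0):
    """Distance-weighted distribution over the k nearest keys' tokens."""
    dists = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(dists)[:k]
    weights = np.exp(-dists[nn] / temp)
    probs = np.zeros(vocab_size)
    for idx, w in zip(nn, weights):
        probs[values[idx]] += w
    return probs / probs.sum()

def interpolate(lm_probs, knn_dist, lam=0.25):
    """p(y|x) = lam * p_kNN(y|x) + (1 - lam) * p_LM(y|x)."""
    return lam * knn_dist + (1.0 - lam) * lm_probs
```

A small interpolation weight keeps the base LM dominant while letting the datastore inject domain-specific tokens (e.g., framework-specific API names) that the LM alone would rarely produce.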