🤖 AI Summary
Existing large language models (LLMs) exhibit weak performance in deep learning (DL) code generation, primarily because prevailing benchmarks, such as DS-1000, focus narrowly on fragmented data pre- and post-processing code and lack systematic evaluation across full DL pipelines (e.g., preprocessing, modeling, training), diverse tasks (classification, regression, recommendation), and multimodal data (tabular, image, text).
Method: We introduce DeepBench, the first function-level, end-to-end DL pipeline benchmark. It features a novel orthogonal three-dimensional taxonomy, *phase–task–data modality*, a DL-specific error taxonomy comprising 12 high-frequency error categories, and rigorous quality assurance via human annotation and expert validation.
Results: Experiments reveal GPT-4o achieves only 31% accuracy on DeepBench, a 29-percentage-point drop from its DS-1000 score, with phase-wise performance gaps up to 7% and task-wise gaps up to 37%, quantitatively exposing previously unmeasured structural deficiencies in LLMs' DL code generation capabilities.
📝 Abstract
Deep learning (DL) has revolutionized areas such as computer vision and natural language processing. However, developing DL systems is challenging due to the complexity of DL workflows. Large Language Models (LLMs) such as GPT, Claude, Llama, and Mistral have emerged as promising tools to assist in DL code generation, offering potential solutions to these challenges. Despite this, existing benchmarks such as DS-1000 are limited: they primarily focus on small DL code snippets related to pre- and post-processing tasks and lack comprehensive coverage of the full DL pipeline, including different DL phases and input data types. To address this, we introduce DeepBench, a novel benchmark dataset designed for function-level DL code generation. DeepBench categorizes DL problems along three key aspects: phases, such as pre-processing, model construction, and training; tasks, including classification, regression, and recommendation; and input data types, such as tabular, image, and text. GPT-4o, the state-of-the-art LLM, achieved 31% accuracy on DeepBench, significantly lower than its 60% on DS-1000. We observed similar difficulty for other LLMs (e.g., 28% vs. 54% for Claude, 21% vs. 41% for Llama, and 15% vs. 20% for Mistral), underscoring DeepBench's greater complexity. We also construct a taxonomy of issues and bugs found in LLM-generated DL code, which highlights the distinct challenges LLMs face when generating DL code compared to general code. Furthermore, our analysis reveals substantial performance variations across categories, with differences of up to 7% among phases and 37% among tasks. These disparities suggest that DeepBench offers valuable insights into LLMs' performance and areas for potential improvement in the DL domain.
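The three categorization axes described above (phase, task, input data type) can be sketched as a small tagging scheme. This is a minimal illustrative sketch, not DeepBench's actual schema: the class name `ProblemTag`, the field names, and the validation logic are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of tagging a benchmark problem along the three
# axes the abstract describes. Names and structure are illustrative
# assumptions, not the real DeepBench data format.
from dataclasses import dataclass

# Axis values taken directly from the abstract's examples.
PHASES = {"pre-processing", "model construction", "training"}
TASKS = {"classification", "regression", "recommendation"}
DATA_TYPES = {"tabular", "image", "text"}

@dataclass(frozen=True)
class ProblemTag:
    """One point in the phase x task x data-type taxonomy."""
    phase: str
    task: str
    data_type: str

    def __post_init__(self):
        # Reject tags outside the taxonomy so every problem maps to
        # exactly one cell of the three-dimensional grid.
        assert self.phase in PHASES, f"unknown phase: {self.phase}"
        assert self.task in TASKS, f"unknown task: {self.task}"
        assert self.data_type in DATA_TYPES, f"unknown data type: {self.data_type}"

# Example: an image-classification problem at the pre-processing phase.
tag = ProblemTag("pre-processing", "classification", "image")
```

Under this scheme, aggregating accuracy per `phase` (or per `task`) would yield the per-category breakdowns the abstract reports, such as the up-to-7% gap among phases and up-to-37% gap among tasks.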