Evaluating Efficiency and Novelty of LLM-Generated Code for Graph Analysis

📅 2025-07-08
🤖 AI Summary
This study presents the first systematic evaluation of large language models' (LLMs) ability to generate efficient, deployable C implementations of graph algorithms under runtime and memory constraints, emphasizing performance and integrability. Whereas prior work focused on functional correctness or high-level languages (e.g., Python), this study proposes a dual-path evaluation framework: (1) performance-beyond testing, which measures speedup over human-written baselines on tasks such as triangle counting; and (2) algorithm integration testing, which assesses plug-and-play compatibility within real-world graph-analytics pipelines. Eight state-of-the-art models are evaluated, including Claude, ChatGPT, and Gemini variants. Claude Sonnet 4 Extended achieves the highest performance, generating triangle-counting code that outperforms hand-optimized implementations. While the models consistently optimize existing algorithms, they rarely invent novel ones. To foster reproducible research, the authors open-source all prompts, generated code, and benchmarking scripts.

📝 Abstract
Large Language Models (LLMs) are increasingly used to automate software development, yet most prior evaluations focus on functional correctness or on high-level languages such as Python. We present the first systematic study of LLMs' ability to generate efficient C implementations of graph-analysis routines: code that must satisfy stringent runtime and memory constraints. Eight state-of-the-art models (OpenAI ChatGPT o3 and o4-mini-high, Anthropic Claude 4 Sonnet and Sonnet Extended, Google Gemini 2.5 Flash and Pro, xAI Grok 3-Think, and DeepSeek DeepThink R1) are benchmarked using two distinct approaches. The first approach tests whether an LLM can generate an algorithm that outperforms the algorithms already present in the benchmark. The second approach evaluates the ability of LLMs to generate graph algorithms suitable for integration into the benchmark. Results show that Claude Sonnet 4 Extended achieves the best results for ready-to-use code generation and efficiency, outperforming human-written baselines in triangle counting. The study confirms that contemporary LLMs excel at optimizing and integrating established algorithms but not at inventing novel techniques. We provide the prompts, the first approach's generated code, and the measurement scripts to foster reproducible research.
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs' efficiency in generating C graph-analysis code
Compare LLM-generated code with human-written baselines
Assess LLMs' ability to innovate novel graph algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate efficient C graph-analysis code
Benchmarking eight state-of-the-art LLM models
Claude Sonnet 4 Extended outperforms human baselines
Atieh Barati Nia
Department of Data Science, New Jersey Institute of Technology, Newark, NJ, USA
Mohammad Dindoost
Department of Data Science, New Jersey Institute of Technology, Newark, NJ, USA
David A. Bader
Distinguished Professor, New Jersey Institute of Technology
data science, high performance computing, cybersecurity, massive-scale analytics, computational genomics