🤖 AI Summary
This study presents the first systematic evaluation of large language models' (LLMs) ability to generate efficient, deployable C implementations of graph algorithms under runtime and memory constraints, emphasizing performance and integrability. Whereas prior work focused on functional correctness or high-level languages (e.g., Python), we propose a dual-path evaluation framework: (1) performance testing, measuring speedup over human-written baselines on tasks such as triangle counting; and (2) algorithm-integration testing, assessing plug-and-play compatibility within real-world graph-analytics pipelines. We evaluate eight state-of-the-art models, including Claude, ChatGPT, and Gemini. Results show that Claude Sonnet 4 Extended achieves the highest performance, generating triangle-counting code that outperforms hand-optimized implementations. While the models consistently optimize existing algorithms, they rarely invent novel ones. To foster reproducible research, we open-source all prompts, generated code, and benchmarking scripts.
📝 Abstract
Large Language Models (LLMs) are increasingly used to automate software development, yet most prior evaluations focus on functional correctness or on high-level languages such as Python. We present the first systematic study of LLMs' ability to generate efficient C implementations of graph-analysis routines: code that must satisfy stringent runtime and memory constraints. Eight state-of-the-art models (OpenAI ChatGPT o3 and o4-mini-high, Anthropic Claude 4 Sonnet and Sonnet Extended, Google Gemini 2.5 Flash and Pro, xAI Grok 3-Think, and DeepSeek DeepThink R1) are benchmarked using two distinct approaches. The first tests whether an LLM can generate an algorithm that outperforms the algorithms already present in the benchmark. The second evaluates whether LLM-generated graph algorithms can be integrated into the benchmark as ready-to-use components. Results show that Claude Sonnet 4 Extended achieves the best results in both ready-to-use code generation and efficiency, outperforming human-written baselines in triangle counting. The study confirms that contemporary LLMs excel at optimizing and integrating established algorithms but rarely invent novel techniques. We provide the prompts, the code generated in the first approach, and the measurement scripts to foster reproducible research.
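For readers unfamiliar with the benchmark task, the following is a minimal reference sketch of triangle counting in C. It is an illustrative baseline only, assuming a small dense graph stored as an adjacency matrix; the implementations evaluated in the study (both human-written and LLM-generated) use far more optimized data layouts and enumeration orders.

```c
#include <assert.h>
#include <string.h>

#define MAX_N 64  /* illustrative cap on vertex count; not from the paper */

/* Count triangles in an undirected simple graph given as an edge list.
   Each triangle is counted once by enumerating vertex triples i < j < k. */
static long count_triangles(int n, int m, const int edges[][2]) {
    static unsigned char adj[MAX_N][MAX_N];
    memset(adj, 0, sizeof adj);
    for (int e = 0; e < m; e++) {
        adj[edges[e][0]][edges[e][1]] = 1;
        adj[edges[e][1]][edges[e][0]] = 1;
    }
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (adj[i][j])
                for (int k = j + 1; k < n; k++)
                    if (adj[i][k] && adj[j][k])
                        count++;
    return count;
}
```

The i < j < k ordering avoids the factor-of-six overcount that arises when every rotation and reflection of a triangle is visited; optimized implementations achieve the same effect by orienting edges from lower- to higher-degree vertices.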