Analyzing Prominent LLMs: An Empirical Study of Performance and Complexity in Solving LeetCode Problems

📅 2025-08-05
🤖 AI Summary
Despite growing adoption of large language models (LLMs) in programming assistance, systematic empirical comparisons of their algorithmic efficiency and robustness across problem difficulty levels remain scarce. Method: This work conducts the first comprehensive, cross-model evaluation of ChatGPT, GitHub Copilot, Gemini, and DeepSeek on 150 LeetCode coding tasks spanning easy to hard difficulty levels, assessing Java/Python code generation quality via execution time, memory consumption, theoretical time/space complexity, and solution success rate. Contribution/Results: ChatGPT achieves the highest overall performance—exhibiting both superior success rates and lower average algorithmic complexity. Copilot and DeepSeek show pronounced performance degradation with increasing problem difficulty, while Gemini frequently requires multiple attempts to solve hard problems. These findings provide empirically grounded guidance for context-aware LLM selection in software development workflows, highlighting critical trade-offs between correctness, efficiency, and reliability across models and task complexities.

📝 Abstract
Large Language Models (LLMs) like ChatGPT, Copilot, Gemini, and DeepSeek are transforming software engineering by automating key tasks, including code generation, testing, and debugging. As these models become integral to development workflows, a systematic comparison of their performance is essential for optimizing their use in real-world applications. This study benchmarks these four prominent LLMs on one hundred and fifty LeetCode problems across easy, medium, and hard difficulties, generating solutions in Java and Python. We evaluate each model based on execution time, memory usage, and algorithmic complexity, revealing significant performance differences. ChatGPT demonstrates consistent efficiency in execution time and memory usage, while Copilot and DeepSeek show variability as task complexity increases. Gemini, although effective on simpler tasks, requires more attempts as problem difficulty rises. Our findings provide actionable insights into each model's strengths and limitations, offering guidance for developers selecting LLMs for specific coding tasks and on the performance and complexity of GPT-like generated solutions.
Problem

Research questions and friction points this paper is trying to address.

Compare performance of four LLMs on LeetCode problems
Evaluate LLMs by execution time, memory usage, complexity
Identify strengths and limitations of each LLM for coding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking four LLMs on LeetCode problems
Evaluating execution time and memory usage
Comparing performance across difficulty levels
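The paper does not reproduce its measurement harness here; as a rough illustration of the kind of evaluation the bullets above describe, the sketch below times one candidate solution and records its peak memory with Python's standard `time` and `tracemalloc` modules. The `two_sum` function is a hypothetical stand-in for an LLM-generated solution, and `profile` is an illustrative helper, not the authors' code.

```python
import time
import tracemalloc

def two_sum(nums, target):
    # Hypothetical stand-in for an LLM-generated LeetCode solution (O(n) time).
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []

def profile(fn, *args):
    """Run fn(*args) once; return (result, elapsed_seconds, peak_bytes)."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # peak allocation during the call
    tracemalloc.stop()
    return result, elapsed, peak

result, elapsed, peak = profile(two_sum, [2, 7, 11, 15], 9)
print(result, elapsed, peak)
```

A real study would repeat such runs over many problems and inputs per difficulty tier and average the measurements; theoretical time/space complexity, also evaluated in the paper, still has to be assessed by inspecting the generated code.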
Everton Guimaraes — EASER, Eng. Division, Penn State University, Malvern, USA
Nathalia Nascimento — Assistant Professor, Penn State University
Asish Nelapati — EASER, Eng. Division, Penn State University, Malvern, USA
Chandan Shivalingaiah — EASER, Eng. Division, Penn State University, Malvern, USA