🤖 AI Summary
Despite growing adoption of large language models (LLMs) in programming assistance, systematic empirical comparisons of their algorithmic efficiency and robustness across problem difficulty levels remain scarce.
Method: This work conducts the first comprehensive, cross-model evaluation of ChatGPT, GitHub Copilot, Gemini, and DeepSeek on 150 LeetCode coding tasks spanning easy to hard difficulty levels, assessing Java/Python code generation quality via execution time, memory consumption, theoretical time/space complexity, and solution success rate.
Contribution/Results: ChatGPT achieves the highest overall performance—exhibiting both superior success rates and lower average algorithmic complexity. Copilot and DeepSeek show pronounced performance degradation with increasing problem difficulty, while Gemini frequently requires multiple attempts to solve hard problems. These findings provide empirically grounded guidance for context-aware LLM selection in software development workflows, highlighting critical trade-offs between correctness, efficiency, and reliability across models and task complexities.
📝 Abstract
Large Language Models (LLMs) like ChatGPT, Copilot, Gemini, and DeepSeek are transforming software engineering by automating key tasks, including code generation, testing, and debugging. As these models become integral to development workflows, a systematic comparison of their performance is essential for optimizing their use in real-world applications. This study benchmarks these four prominent LLMs on 150 LeetCode problems across easy, medium, and hard difficulties, generating solutions in Java and Python. We evaluate each model on execution time, memory usage, and algorithmic complexity, revealing significant performance differences. ChatGPT demonstrates consistent efficiency in execution time and memory usage, while Copilot and DeepSeek show variability as task complexity increases. Gemini, although effective on simpler tasks, requires more attempts as problem difficulty rises. Our findings offer actionable guidance for developers selecting LLMs for specific coding tasks and shed light on the performance and complexity of solutions generated by GPT-like models.
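The per-solution metrics described above (execution time and memory usage) can be approximated with a minimal harness like the following Python sketch. Note this is an illustration, not the study's actual instrumentation: the `benchmark` helper is hypothetical, and the Two Sum solution stands in for an arbitrary LLM-generated answer to an easy LeetCode problem.

```python
import time
import tracemalloc

def benchmark(solution, *args):
    """Run one candidate solution, returning (result, seconds, peak_bytes)."""
    tracemalloc.start()                      # track Python memory allocations
    start = time.perf_counter()
    result = solution(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # peak allocated bytes during the call
    tracemalloc.stop()
    return result, elapsed, peak

# Stand-in LLM-generated solution to LeetCode "Two Sum" (easy difficulty).
def two_sum(nums, target):
    seen = {}  # value -> index of values scanned so far
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []

result, elapsed, peak = benchmark(two_sum, [2, 7, 11, 15], 9)
print(result, f"{elapsed:.6f}s", f"{peak} bytes peak")
```

A harness like this captures only wall-clock time and Python-level allocations; judging theoretical time/space complexity, as the study also does, still requires inspecting the generated algorithm itself.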