🤖 AI Summary
This study addresses the unquantified environmental impact of large language model (LLM)-assisted programming compared with traditional human coding. Method: We conduct the first empirical carbon footprint assessment using real-world Codeforces programming tasks, establishing a rigorous quantification framework that integrates energy consumption monitoring, behavioral logging, computational complexity modeling, and carbon accounting—incorporating power usage effectiveness (PUE), GPU power draw, and regional grid emission factors. Results: LLM-assisted coding incurs, on average, 32.72× higher carbon emissions than manual coding; critically, the emission overhead scales significantly with task complexity. This work is the first to quantify the hidden environmental cost of AI-assisted coding, introduces a reproducible benchmark for green software engineering evaluation, and proposes a systematic optimization pathway toward low-carbon AI development—providing foundational evidence and methodological support for sustainable AI engineering practice.
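The carbon accounting described above can be sketched as a simple calculation: energy drawn by the hardware, scaled by the data center's PUE, multiplied by the regional grid emission factor. This is a minimal illustration of that structure, not the paper's actual implementation; the function name and the numeric inputs are illustrative assumptions.

```python
def carbon_footprint_kg(power_w: float, runtime_s: float,
                        pue: float, grid_kg_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions in kilograms.

    power_w          -- average hardware (e.g. GPU) power draw in watts
    runtime_s        -- task duration in seconds
    pue              -- data-center power usage effectiveness (>= 1.0)
    grid_kg_per_kwh  -- regional grid emission factor in kg CO2e per kWh
    """
    # Convert joules (W * s) to kWh: 1 kWh = 3.6e6 J
    energy_kwh = power_w * runtime_s / 3.6e6
    # Scale facility-level energy by PUE, then apply the grid factor
    return energy_kwh * pue * grid_kg_per_kwh

# Illustrative values only (not measurements from this study):
# a 300 W GPU running for 10 s, PUE 1.5, grid factor 0.4 kg/kWh
emissions = carbon_footprint_kg(300, 10, 1.5, 0.4)
print(f"{emissions:.6f} kg CO2e")  # → 0.000500 kg CO2e
```

The same formula applies to the manual-coding baseline by substituting workstation power draw and human task duration, which is what makes the two approaches directly comparable.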
📝 Abstract
Large Language Models (LLMs) have significantly transformed various domains, including software development. These models assist programmers in generating code, potentially increasing productivity and efficiency. However, the environmental impact of utilising these AI models is substantial, given their high energy consumption during both training and inference. This research compares the energy consumption of manual software development with an LLM-assisted approach, using Codeforces as a simulation platform for software development. The goal is to quantify the environmental impact and propose strategies for minimising the carbon footprint of using LLMs in software development. Our results show that LLM-assisted code generation leads, on average, to a 32.72× higher carbon footprint than manual coding. Moreover, there is a significant correlation between task complexity and the difference in carbon footprint between the two approaches.