🤖 AI Summary
This study addresses the lack of systematic evaluation of the environmental impact of prompt engineering strategies for locally deployed small language models (SLMs) in AI-assisted programming. The authors introduce the first quantitative framework to assess six prompting methods—including Chain-of-Thought and ReAct—across 11 open-source SLMs (1B–34B parameters) on the HumanEval+ and MBPP+ benchmarks, measuring accuracy, energy consumption, carbon emissions, and inference latency. Their findings reveal that accuracy and sustainability are often decoupled: Chain-of-Thought maintains high accuracy while significantly reducing energy use; multi-sampling yields marginal accuracy gains at substantially higher carbon cost; and grid carbon intensity is the dominant factor in deployment-time emissions. Based on these insights, the paper proposes design principles for environmentally sustainable prompting in green AI.
📝 Abstract
The shift from cloud-hosted Large Language Models (LLMs) to locally deployed open-source Small Language Models (SLMs) has democratized AI-assisted coding; however, it has also decentralized the environmental footprint of AI. While prompting strategies, such as Chain-of-Thought and ReAct, serve as external mechanisms for optimizing code generation without modifying model parameters, their impact on energy consumption and carbon emissions remains largely invisible to developers. This paper presents the first systematic empirical study of how prompt engineering strategies affect both accuracy and sustainability in SLM-based code generation. We evaluate six prominent prompting strategies across 11 open-source models (ranging from 1B to 34B parameters) using the HumanEval+ and MBPP+ benchmarks. By measuring Pass@1 accuracy alongside energy (kWh), carbon emissions (kgCO2eq), and inference latency, we reveal that sustainability often decouples from accuracy, allowing significant environmental optimizations without sacrificing performance. Our findings indicate that Chain-of-Thought, despite being a relatively simple prompting technique, can provide a near-optimal balance between reasoning capability and energy efficiency. Conversely, multi-sampling strategies often incur disproportionate costs for marginal gains. Finally, we identify grid carbon intensity as the dominant factor in deployment-time emissions, highlighting the need for practitioners to consider regional energy profiles. This work provides a quantitative foundation for "green" prompt engineering, enabling developers to align high-performance code generation with ecological responsibility.
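The abstract's point about grid carbon intensity dominating deployment-time emissions follows from the standard estimation model used by measurement tools such as CodeCarbon: emissions are the product of measured energy and the local grid's carbon intensity. The sketch below illustrates this relationship; the intensity values and the fixed energy figure are illustrative assumptions, not data from the study.

```python
# Minimal sketch of deployment-time emissions estimation:
#   emissions (kgCO2eq) = energy (kWh) * grid carbon intensity (kgCO2eq/kWh)
# All numeric values below are assumed for illustration only.

def estimate_emissions_kg(energy_kwh: float, intensity_kg_per_kwh: float) -> float:
    """Estimate emissions in kgCO2eq from measured energy and grid intensity."""
    return energy_kwh * intensity_kg_per_kwh

# Hypothetical regional intensities (kgCO2eq per kWh), chosen to show the spread:
GRID_INTENSITY = {
    "low_carbon_grid": 0.05,   # e.g. a hydro/nuclear-heavy grid (assumed)
    "high_carbon_grid": 0.70,  # e.g. a coal-heavy grid (assumed)
}

energy_kwh = 0.2  # illustrative energy for one benchmark run on a local SLM
for region, intensity in GRID_INTENSITY.items():
    print(f"{region}: {estimate_emissions_kg(energy_kwh, intensity):.4f} kgCO2eq")
```

With identical measured energy, the two hypothetical grids differ in emissions by a factor of 14, which is why the same prompting strategy can have a very different footprint depending on where the model is deployed.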