📝 Abstract
Language models are increasingly applied in software engineering, yet their inference raises growing environmental concerns. Prior work has examined hardware choices and prompt length, but little attention has been paid to the linguistic complexity of prompts as a sustainability factor. This paper introduces Green Prompt Engineering, which frames linguistic complexity as a design dimension that influences both energy consumption and performance. We conduct an empirical study on requirements classification using open-source Small Language Models, varying the readability of prompts. Our results reveal that readability affects both environmental sustainability and performance, exposing trade-offs between the two. For practitioners, simpler prompts can reduce energy costs without a significant loss in F1-score; for researchers, our findings open a path toward guidelines and further studies on sustainable prompt design within the Green AI agenda.