🤖 AI Summary
Climate significantly impacts energy consumption and carbon emissions of AI infrastructure, yet the feasibility and carbon benefits of deploying sustainable LLM inference data centers in arid desert environments—particularly in the Middle East—remain unexplored.
Method: This study conducts the first multinational empirical comparison quantifying energy efficiency and carbon footprint differences across climate zones (desert, temperate, tropical) for LLM inference tasks. Experiments employ the DeepSeek Coder 1.3B model on the HumanEval benchmark, with fine-grained energy and carbon emission tracking via CodeCarbon.
Contribution/Results: Under high-temperature, low-humidity conditions, and when paired with abundant renewable energy and advanced liquid cooling, desert data centers reduce per-inference carbon intensity by 23–37% relative to other climates. Challenges remain, however, in managing cooling redundancy and stabilizing grid-level renewable supply. The work introduces a "climate-aware AI infrastructure placement" paradigm, providing empirical evidence and a decision framework for sustainable global deployment of AI compute.
📝 Abstract
As the Middle East emerges as a strategic hub for artificial intelligence (AI) infrastructure, the feasibility of deploying sustainable data centers in desert environments has become a topic of growing relevance. This paper presents an empirical study of the energy consumption and carbon footprint of large language model (LLM) inference across four countries (the United Arab Emirates, Iceland, Germany, and the United States), using DeepSeek Coder 1.3B on the HumanEval code-generation dataset. We use the CodeCarbon library to track energy consumption and carbon emissions, and compare geographical trade-offs for climate-aware AI deployment. Our findings highlight both the challenges and the potential of data centers in desert regions and provide a balanced outlook on their role in global AI expansion.
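The cross-country comparison rests on a simple relation: per-inference emissions are the energy consumed by an inference multiplied by the carbon intensity of the local grid (which is what tools like CodeCarbon estimate under the hood). A minimal sketch of that calculation is below; all numeric values are illustrative assumptions, not measurements from the paper.

```python
# Sketch of per-inference carbon accounting across grids.
# GRID_INTENSITY values (kgCO2e per kWh) are hypothetical placeholders,
# NOT figures reported in this study.
GRID_INTENSITY = {
    "UAE": 0.40,
    "Iceland": 0.03,
    "Germany": 0.35,
    "USA": 0.38,
}

def carbon_per_inference(energy_kwh: float, country: str) -> float:
    """Emissions (kgCO2e) = energy consumed * grid carbon intensity."""
    return energy_kwh * GRID_INTENSITY[country]

# Suppose one HumanEval inference draws 0.002 kWh (assumed value).
for country in GRID_INTENSITY:
    print(f"{country}: {carbon_per_inference(0.002, country):.6f} kgCO2e")
```

In this model, the same inference workload can differ in footprint by an order of magnitude purely through grid mix, which is why placement (and on-site renewables) matters as much as hardware efficiency.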