AI Summary
To address the carbon emissions arising from the high energy consumption of large language models (LLMs), this work proposes a full-stack, co-designed sustainable AI paradigm that systematically integrates data lifecycle management with system architecture optimization, the first such effort. Methodologically, it introduces green data sampling, sparsification-based preprocessing, low-precision training, dynamic inference scheduling, and hardware-aware compilation. Empirical evaluations across diverse scenarios demonstrate a 30–50% reduction in computational energy consumption, with model accuracy degradation constrained to within 2%. The study establishes a technically viable, low-carbon AI pathway that balances energy efficiency, accuracy, and practicality. Furthermore, it delivers a reusable methodological framework and engineering practice guidelines, providing both theoretical foundations and actionable support for the green deployment of LLMs.
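The low-precision training mentioned above ultimately rests on quantization: representing weights or activations with fewer bits so that compute and memory traffic, and hence energy, shrink. As a minimal illustration of the idea (not the paper's actual method; function names here are hypothetical), symmetric int8 quantization maps floats into the range [-127, 127] via a single scale factor:

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats to integers in [-127, 127].

    The scale is chosen from the largest magnitude so the full range is used.
    """
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid zero scale
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 0.9]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
# Rounding bounds the per-value error by scale / 2, which is why accuracy
# loss can stay small (here, within a couple of percent) while the storage
# cost per value drops from 32 bits to 8.
```

The same scale-and-round pattern underlies most low-precision schemes; the engineering work lies in choosing scales per tensor or per channel and deciding which operations can tolerate the reduced precision.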
Abstract
Sustainable AI is a subfield of AI concerned with developing and using AI systems in ways that reduce environmental impact and advance sustainability. It is increasingly important given that training of and inference with AI models such as large language models consume large amounts of computing power. In this article, we discuss current issues, opportunities and example solutions for addressing these issues, and future challenges to tackle, from the data and system perspectives, related to data acquisition, data processing, and AI model training and inference.