Diffusion-Modeled Reinforcement Learning for Carbon and Risk-Aware Microgrid Optimization

📅 2025-07-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address energy scheduling in multi-microgrid systems, where high renewable penetration and strong uncertainty make it difficult to achieve low-carbon operation, robustness, and real-time responsiveness simultaneously, this paper proposes a carbon- and risk-aware diffusion-enhanced deep reinforcement learning (DRL) method. Methodologically, the approach integrates diffusion models into DRL action-distribution modeling, leveraging denoising generation to enhance policy expressiveness and environmental adaptability. It further jointly models carbon-emission constraints and operational risk sensitivity, enabling multi-objective optimization and explicit uncertainty characterization. Experimental results demonstrate that, compared with state-of-the-art DRL baselines, the proposed method reduces operational cost by 2.3%–30.1%, cuts carbon emissions by 28.7% relative to its non-carbon-aware variant, and significantly reduces performance volatility, while maintaining strong generalizability and deployment flexibility.
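The summary describes folding carbon emissions and operational-risk sensitivity into a single optimization objective. As a minimal sketch of what such a joint objective could look like (the weights, the CVaR-style tail-risk term, and all function names here are illustrative assumptions, not DiffCarl's actual formulation):

```python
import numpy as np

# Hedged sketch: one plausible way to fold carbon and risk awareness into
# a scalar RL reward. The penalty weights and the CVaR-like tail-risk term
# are assumptions for illustration, not the paper's exact formulation.

def carbon_risk_reward(cost, emissions_kg, recent_costs,
                       lam_carbon=0.05, lam_risk=0.2, alpha=0.95):
    """Negative operating cost, penalized by emissions and tail risk.

    Tail risk is approximated by the mean of recent costs at or above
    the alpha-quantile (a CVaR-style estimate over a rolling window).
    """
    if len(recent_costs):
        tail = np.quantile(recent_costs, alpha)
        cvar = np.mean([c for c in recent_costs if c >= tail])
    else:
        cvar = 0.0
    return -(cost + lam_carbon * emissions_kg + lam_risk * cvar)

r = carbon_risk_reward(cost=120.0, emissions_kg=50.0,
                       recent_costs=[100, 110, 130, 200, 95])
print(r)  # -162.5
```

Scalarizing the objectives this way keeps the problem compatible with standard DRL training loops; alternative designs (constrained RL, Lagrangian relaxation) would handle the carbon limit as a hard constraint instead.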

📝 Abstract
This paper introduces DiffCarl, a diffusion-modeled carbon- and risk-aware reinforcement learning algorithm for intelligent operation of multi-microgrid systems. With the growing integration of renewables and increasing system complexity, microgrid communities face significant challenges in real-time energy scheduling and optimization under uncertainty. DiffCarl integrates a diffusion model into a deep reinforcement learning (DRL) framework to enable adaptive energy scheduling under uncertainty and explicitly account for carbon emissions and operational risk. By learning action distributions through a denoising generation process, DiffCarl enhances DRL policy expressiveness and enables carbon- and risk-aware scheduling in dynamic and uncertain microgrid environments. Extensive experimental studies demonstrate that it outperforms classic algorithms and state-of-the-art DRL solutions, with 2.3-30.1% lower operational cost. It also achieves 28.7% lower carbon emissions than those of its carbon-unaware variant and reduces performance variability. These results highlight DiffCarl as a practical and forward-looking solution. Its flexible design allows efficient adaptation to different system configurations and objectives to support real-world deployment in evolving energy systems.
Problem

Research questions and friction points this paper is trying to address.

Optimizes real-time energy scheduling in multi-microgrid systems
Reduces carbon emissions and operational risk under uncertainty
Enhances policy expressiveness using diffusion-modeled reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-modeled RL for microgrid optimization
Carbon- and risk-aware adaptive scheduling
Denoising process enhances policy expressiveness
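The bullets above center on generating actions through a denoising process. A minimal DDPM-style sketch of sampling an action from a diffusion-modeled policy is shown below; the noise schedule, step count, and the placeholder `eps_model` are all assumptions standing in for DiffCarl's trained, state-conditioned network:

```python
import numpy as np

# Hedged sketch (not the paper's implementation): sampling an action by
# reverse denoising, DDPM-style. `eps_model` is a toy placeholder; in a
# diffusion-modeled policy it would be a trained noise-prediction network
# conditioned on the microgrid state.

T = 10                                       # denoising steps (assumed small for real-time use)
betas = np.linspace(1e-4, 0.1, T)            # noise schedule (assumption)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(a_t, state, t):
    """Placeholder noise predictor; a real policy network goes here."""
    return 0.1 * a_t

def sample_action(state, action_dim=3, rng=None):
    """Reverse diffusion: start from Gaussian noise, denoise to an action."""
    rng = rng or np.random.default_rng(0)
    a = rng.standard_normal(action_dim)      # a_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = eps_model(a, state, t)
        # Posterior mean of the reverse step (standard DDPM update)
        a = (a - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                            # inject noise except at the final step
            a = a + np.sqrt(betas[t]) * rng.standard_normal(action_dim)
    return np.clip(a, -1.0, 1.0)             # bounded dispatch set-points

state = np.zeros(4)                          # toy microgrid state vector
action = sample_action(state)
print(action.shape)                          # (3,)
```

The appeal of this construction, as the abstract argues, is expressiveness: the denoised action distribution can be multimodal and state-dependent, unlike the diagonal Gaussians typical of standard actor networks.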
Yunyi Zhao
Information and Communications Technology Cluster, Singapore Institute of Technology, Singapore 138683, and Department of Electrical and Computer Engineering, National University of Singapore, Singapore 119077
Wei Zhang
Information and Communications Technology Cluster, Singapore Institute of Technology, Singapore 138683
Cheng Xiang
National University of Singapore
Control systems, computer vision, machine learning, artificial intelligence
Hongyang Du
Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong SAR, China
Dusit Niyato
College of Computing and Data Science, Nanyang Technological University, Singapore 639798
Shuhua Gao
Shandong University
Computational intelligence, machine learning, system modeling, control theory & automation