Trustworthy and Explainable Deep Reinforcement Learning for Safe and Energy-Efficient Process Control: A Use Case in Industrial Compressed Air Systems

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of simultaneously ensuring safety, energy efficiency, and algorithmic trustworthiness in industrial compressed air system control, this paper proposes a trustworthy and interpretable deep reinforcement learning (DRL) controller. Methodologically, we design a multi-level interpretability pipeline integrating input perturbation robustness testing, gradient sensitivity analysis, and SHAP-based feature attribution to guarantee physically plausible decisions, prediction-driven control, and strict adherence to safety constraints. Our key contribution is the first deep integration of interpretability mechanisms into a closed-loop DRL industrial controller, without requiring explicit physical modeling. Experimental results demonstrate that, compared to conventional industrial controllers, the proposed approach significantly reduces overpressure events and achieves an approximately 4% reduction in overall energy consumption. Attribution analysis further confirms that pressure and load forecasts serve as the primary decision drivers.

📝 Abstract
This paper presents a trustworthy reinforcement learning approach for the control of industrial compressed air systems. We develop a framework that enables safe and energy-efficient operation under realistic boundary conditions and introduce a multi-level explainability pipeline combining input perturbation tests, gradient-based sensitivity analysis, and SHAP (SHapley Additive exPlanations) feature attribution. An empirical evaluation across multiple compressor configurations shows that the learned policy is physically plausible, anticipates future demand, and consistently respects system boundaries. Compared to the installed industrial controller, the proposed approach reduces unnecessary overpressure and achieves energy savings of approximately 4% without relying on explicit physics models. The results further indicate that system pressure and forecast information dominate policy decisions, while compressor-level inputs play a secondary role. Overall, the combination of efficiency gains, predictive behavior, and transparent validation supports the trustworthy deployment of reinforcement learning in industrial energy systems.
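The first two stages of the explainability pipeline named in the abstract (input perturbation tests and gradient-based sensitivity analysis) can be sketched generically as below. This is an illustrative outline, not the paper's implementation: the toy `policy` function, the feature names, the weights, and the perturbation scale are all assumptions for demonstration. Gradients are taken by finite differences so the sketch works for any black-box policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained DRL policy: maps a state vector to a
# normalized compressor setpoint. Weights are illustrative only.
FEATURES = ["system_pressure", "load_forecast", "compressor_temp", "motor_current"]
W = np.array([0.8, 0.5, 0.1, 0.05])

def policy(x: np.ndarray) -> float:
    """Deterministic policy output for state x."""
    return float(np.tanh(W @ x))

def perturbation_robustness(x: np.ndarray, scale: float = 0.01, n: int = 200) -> float:
    """Input-perturbation test: spread of policy outputs under small
    Gaussian noise on the inputs. A small spread indicates a policy that
    does not react erratically to sensor-level disturbances."""
    outs = [policy(x + rng.normal(0.0, scale, size=x.shape)) for _ in range(n)]
    return float(np.std(outs))

def gradient_sensitivity(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Central finite-difference sensitivity of the policy output with
    respect to each input feature (black-box gradient estimate)."""
    grads = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        grads[i] = (policy(x + d) - policy(x - d)) / (2 * eps)
    return grads

x0 = np.array([0.9, 0.7, 0.3, 0.2])
print("robustness (output std under noise):", perturbation_robustness(x0))
for name, g in zip(FEATURES, gradient_sensitivity(x0)):
    print(f"sensitivity[{name}] = {g:+.4f}")
```

With these toy weights, the sensitivity of `system_pressure` dominates, mirroring the paper's finding that pressure and forecast inputs drive the policy while compressor-level inputs are secondary.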
Problem

Research questions and friction points this paper is trying to address.

Develop trustworthy reinforcement learning for industrial compressed air control
Ensure safe and energy-efficient operation under realistic conditions
Provide explainable policy decisions through multi-level analysis methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trustworthy reinforcement learning for industrial compressed air control
Multi-level explainability pipeline with SHAP and sensitivity analysis
Energy savings without explicit physics models
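The third pipeline stage, SHAP-style feature attribution, can be illustrated with an exact Shapley computation, feasible here because the toy state has only four features. This is a minimal sketch of the Shapley idea underlying SHAP, not the paper's setup; a real deployment would use the `shap` library on the trained policy. The policy, weights, and zero baseline are hypothetical.

```python
import itertools
import math
import numpy as np

FEATURES = ["system_pressure", "load_forecast", "compressor_temp", "motor_current"]
W = np.array([0.8, 0.5, 0.1, 0.05])

def policy(x: np.ndarray) -> float:
    """Hypothetical stand-in for a trained DRL policy."""
    return float(np.tanh(W @ x))

def shapley_values(x: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Exact Shapley attribution by enumerating all feature coalitions.
    'Absent' features are replaced by their baseline value."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Shapley weight for a coalition of size k
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                x_S = baseline.copy()
                for j in S:
                    x_S[j] = x[j]
                x_Si = x_S.copy()
                x_Si[i] = x[i]
                phi[i] += w * (policy(x_Si) - policy(x_S))
    return phi

x0 = np.array([0.9, 0.7, 0.3, 0.2])
base = np.zeros(4)
phi = shapley_values(x0, base)
for name, v in zip(FEATURES, phi):
    print(f"phi[{name}] = {v:+.4f}")
# Efficiency property: attributions sum to f(x) - f(baseline)
print("sum check:", np.isclose(phi.sum(), policy(x0) - policy(base)))
```

The efficiency check at the end is the defining SHAP property: per-feature attributions always sum to the difference between the explained prediction and the baseline prediction, which is what makes attributions like "pressure and load forecasts are the primary decision drivers" quantitatively well-posed.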