DRL-Based Injection Molding Process Parameter Optimization for Adaptive and Profitable Production

📅 2025-05-16
🤖 AI Summary
To address the challenge of jointly optimizing product quality and profitability in injection molding under dynamic operating conditions and multi-source costs (e.g., resin, mold wear, time-of-use electricity pricing), this paper proposes a real-time profit-driven deep reinforcement learning framework. We innovatively model profitability as a learnable reward function that integrates a surrogate model for quality prediction with a time-of-use electricity-aware profit formulation. A hybrid SAC–PPO policy enables online adaptive decision-making, accommodating both transient process variations and seasonal shifts. Experiments demonstrate that the framework achieves optimal profitability under prescribed yield constraints, with inference speed 135× faster than genetic algorithms—satisfying millisecond-level control requirements in production lines. To the best of our knowledge, this is the first work to realize end-to-end, real-time co-optimization of quality and economic objectives in injection molding processes.
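The profit-driven reward described above can be sketched as a per-cycle function: revenue for a part that meets the quality threshold, minus resin, mold-wear, and time-of-use electricity costs. All names, cost figures, and the tariff schedule below are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class CostParams:
    resin_cost_per_kg: float        # material cost
    mold_wear_cost_per_cycle: float # amortized mold degradation
    unit_price: float               # revenue per good part

def tou_electricity_price(hour: int) -> float:
    """Hypothetical time-of-use tariff: peak hours cost more."""
    return 0.25 if 9 <= hour < 21 else 0.10  # $/kWh

def profit_reward(part_mass_kg: float, energy_kwh: float,
                  quality_ok: bool, hour: int, p: CostParams) -> float:
    """Per-cycle profit: revenue (only if the part passes quality)
    minus resin, mold-wear, and TOU electricity costs."""
    revenue = p.unit_price if quality_ok else 0.0
    cost = (part_mass_kg * p.resin_cost_per_kg
            + p.mold_wear_cost_per_cycle
            + energy_kwh * tou_electricity_price(hour))
    return revenue - cost
```

Because the tariff enters the reward directly, the same policy can learn to trade cycle time against energy cost differently in peak and off-peak hours.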

📝 Abstract
Plastic injection molding remains essential to modern manufacturing. However, optimizing process parameters to balance product quality and profitability under dynamic environmental and economic conditions remains a persistent challenge. This study presents a novel deep reinforcement learning (DRL)-based framework for real-time process optimization in injection molding, integrating product quality and profitability into the control objective. A profit function was developed to reflect real-world manufacturing costs, incorporating resin, mold wear, and electricity prices, including time-of-use variations. Surrogate models were constructed to predict product quality and cycle time, enabling efficient offline training of DRL agents using soft actor-critic (SAC) and proximal policy optimization (PPO) algorithms. Experimental results demonstrate that the proposed DRL framework can dynamically adapt to seasonal and operational variations, consistently maintaining product quality while maximizing profit. Compared to traditional optimization methods such as genetic algorithms, the DRL models achieved comparable economic performance with up to 135× faster inference speeds, making them well-suited for real-time applications. The framework's scalability and adaptability highlight its potential as a foundation for intelligent, data-driven decision-making in modern manufacturing environments.
Problem

Research questions and friction points this paper is trying to address.

Optimizing injection molding parameters for quality and profit
Adapting to dynamic environmental and economic conditions
Real-time process optimization using deep reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

DRL-based real-time process optimization framework
Profit function integrating costs and quality
Surrogate models for efficient offline training
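The surrogate-model idea listed above can be sketched as a lightweight training environment in which learned predictors replace the physical machine, so a DRL agent (e.g. SAC or PPO) can be trained offline at low cost. The `SurrogateMoldingEnv` class, its stand-in predictors, and all coefficients below are hypothetical, not taken from the paper:

```python
import numpy as np

class SurrogateMoldingEnv:
    """Minimal offline-training environment: surrogate models stand in
    for the real injection molding machine. The quality and cycle-time
    surrogates here are simple analytic stand-ins for trained predictors."""

    def __init__(self):
        self.hour = 0  # clock for time-of-use pricing

    def quality_surrogate(self, params: np.ndarray) -> float:
        # stand-in for a trained quality predictor (e.g. a neural net);
        # quality peaks when normalized parameters sit at 0.5
        return float(1.0 - np.sum((params - 0.5) ** 2))

    def cycle_time_surrogate(self, params: np.ndarray) -> float:
        # stand-in for a trained cycle-time predictor (seconds)
        return float(20.0 + 10.0 * params.mean())

    def step(self, params: np.ndarray):
        quality = self.quality_surrogate(params)
        cycle_time = self.cycle_time_surrogate(params)
        energy = 0.01 * cycle_time                      # kWh per cycle
        price = 0.25 if 9 <= self.hour < 21 else 0.10   # TOU tariff, $/kWh
        reward = (3.0 if quality > 0.9 else 0.0) - energy * price
        self.hour = (self.hour + 1) % 24
        obs = np.concatenate([params, [self.hour / 24.0]])
        return obs, reward, False, {}
```

Exposing the (normalized) hour in the observation is what lets the agent condition its parameter choices on the electricity tariff rather than treating energy cost as a fixed constant.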
Joon-Young Kim
Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea; Industrial Intelligence Research Group, AI/DX Center, Institute for Advanced Engineering (IAE), Yongin, Republic of Korea
Jecheon Yu
Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
Heekyu Kim
Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
Seunghwa Ryu
KAIST Endowed Chair Professor of Mechanical Engineering
Mechanics · Materials Modeling · AI-Based Design · Composites