Novel Multi-Agent Action Masked Deep Reinforcement Learning for General Industrial Assembly Lines Balancing Problems

📅 2025-07-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
General industrial assembly line balancing involves joint optimization of task assignment, resource scheduling, and multi-constraint satisfaction, yet existing approaches struggle to balance solution accuracy and real-time responsiveness. Method: This paper proposes a multi-agent deep reinforcement learning framework grounded in a Markov decision process (MDP) formulation. It introduces an action masking mechanism to enforce action feasibility and adopts a centralized training with decentralized execution (CTDE) architecture to compress the state-action space, thereby enhancing scalability and training efficiency. Contribution/Results: Unlike conventional integer programming and heuristic methods, the proposed approach operates without assumptions about assembly line topology and achieves significantly faster convergence to high-quality solutions while supporting real-time optimization. Numerical experiments demonstrate superior solution quality and computational efficiency compared to established baselines.
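The action-masking mechanism described above can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration, not the paper's implementation: it assumes a discrete action space and shows the standard trick of setting the logits of infeasible actions to negative infinity before the softmax, so the agent assigns them zero probability.

```python
import numpy as np

def masked_action_probs(logits, feasible):
    """Turn raw policy logits into action probabilities, forcing
    infeasible actions (feasible[i] == False) to probability zero."""
    masked = np.where(feasible, logits, -np.inf)
    z = masked - masked.max()        # numerically stable softmax
    p = np.exp(z)
    return p / p.sum()

# Hypothetical example: 4 candidate tasks, tasks 1 and 3 violate constraints
logits = np.array([1.2, 0.4, -0.3, 2.0])
feasible = np.array([True, False, True, False])
probs = masked_action_probs(logits, feasible)
```

Because infeasible actions can never be sampled, the agent wastes no training time learning to avoid them, which is the source of the reported reduction in training time.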

📝 Abstract
Efficient planning of activities is essential for modern industrial assembly lines to uphold manufacturing standards, prevent project constraint violations, and achieve cost-effective operations. While exact solutions to such challenges can be obtained through Integer Programming (IP), the dependence of the search space on input parameters often makes IP computationally infeasible for large-scale scenarios. Heuristic methods, such as Genetic Algorithms, can also be applied, but they frequently produce suboptimal solutions in extensive cases. This paper introduces a novel mathematical model of a generic industrial assembly line formulated as a Markov Decision Process (MDP), without imposing assumptions on the type of assembly line, a notable distinction from most existing models. The proposed model is employed to create a virtual environment for training Deep Reinforcement Learning (DRL) agents to optimize task and resource scheduling. To enhance the efficiency of agent training, the paper proposes two innovative tools. The first is an action-masking technique, which ensures the agent selects only feasible actions, thereby reducing training time. The second is a multi-agent approach in which each workstation is managed by an individual agent; as a result, the state and action spaces are reduced. A centralized training framework with decentralized execution is adopted, offering a scalable learning architecture for optimizing industrial assembly lines. This framework allows the agents to learn offline and subsequently provide real-time solutions during operations by leveraging a neural network that maps the current factory state to the optimal action. The effectiveness of the proposed scheme is validated through numerical simulations, demonstrating significantly faster convergence to the optimal solution compared to a comparable model-based approach.
Problem

Research questions and friction points this paper is trying to address.

Optimizing task scheduling in industrial assembly lines
Reducing computational complexity in large-scale scenarios
Enhancing training efficiency for multi-agent DRL systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent DRL for assembly line optimization
Action-masking to ensure feasible actions
Centralized training with decentralized execution
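The per-workstation decomposition in the points above can be sketched as follows. This is a hedged toy illustration, not the paper's architecture: the linear "policy" weights, state dimension, and task counts are all made up, and the centralized critic that would score the joint action during training is only noted in a comment. The point is the structural idea: each agent sees only its local workstation state and chooses from its own small action set, so no agent ever faces the exponentially large joint space.

```python
import numpy as np

rng = np.random.default_rng(0)

class WorkstationAgent:
    """Decentralized actor: maps a local workstation state to a task index.
    Weights stand in for a trained policy network (hypothetical)."""
    def __init__(self, state_dim, n_tasks):
        self.W = rng.normal(scale=0.1, size=(n_tasks, state_dim))

    def act(self, local_state, feasible):
        logits = self.W @ local_state
        logits[~feasible] = -np.inf      # action mask: never pick infeasible tasks
        return int(np.argmax(logits))

# 3 workstations, 5 candidate tasks each; during execution each agent
# acts on its own state slice (a centralized critic would only be used offline).
agents = [WorkstationAgent(state_dim=4, n_tasks=5) for _ in range(3)]
local_states = rng.normal(size=(3, 4))
masks = np.ones((3, 5), dtype=bool)
joint_action = [a.act(s, m) for a, s, m in zip(agents, local_states, masks)]
```

Note the scalability argument: a single agent over this factory would choose among 5**3 = 125 joint actions, while each decentralized agent chooses among only 5.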