Dynamic Collaborative Material Distribution System for Intelligent Robots In Smart Manufacturing

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-time navigation for dynamic multi-robot systems in smart manufacturing—specifically the dynamic multi-source single-destination (DMS-SD) scenario—remains challenging due to insufficient long-term experience learning, overreliance on limited historical trajectories, high computational overhead, and inability to meet millisecond-level response requirements. Method: This paper proposes a lightweight edge-deployable deep reinforcement learning (DRL) framework. It introduces a goal-guided reward function, integrates model pruning and quantization-based compression, and leverages edge-cooperative inference. Contribution/Results: To the best of our knowledge, this is the first work to achieve real-time deployment of a DRL model on resource-constrained IoT devices and mobile terminals. Experiments demonstrate path-planning latency reduced to the millisecond level—100× faster than conventional enumeration-based methods—while maintaining high energy efficiency and robustness under stringent resource constraints, significantly enhancing both real-time responsiveness and energy efficiency in production-line material delivery.
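The summary mentions pruning and quantization-based compression but gives no details of the pipeline. As a generic illustration only (not the paper's actual method), symmetric int8 weight quantization stores each weight as a small integer plus one shared float scale, shrinking model size roughly 4× versus float32:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to [-127, 127]
    using one shared scale factor (a generic sketch, not the
    paper's specific compression scheme)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights for inference."""
    return [v * scale for v in q]
```

On resource-constrained IoT devices, the int8 representation cuts both memory footprint and inference cost, which is the general motivation behind the compression step described above.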

📝 Abstract
The collaboration and interaction of multiple robots have become integral aspects of smart manufacturing. Effective planning and management play a crucial role in achieving energy savings and minimising overall costs. This paper addresses the real-time Dynamic Multiple Sources to Single Destination (DMS-SD) navigation problem, particularly with a material distribution case for multiple intelligent robots in smart manufacturing. Enumerated solutions, such as in Xiao et al. (2022), tackle the problem by generating as many optimal or near-optimal solutions as possible but do not learn patterns from the previous experience, whereas the method in Xiao et al. (2023) only uses limited information from the earlier trajectories. Consequently, these methods may take a considerable amount of time to compute results on large maps, rendering real-time operations impractical. To overcome this challenge, we propose a lightweight Deep Reinforcement Learning (DRL) method to address the DMS-SD problem. The proposed DRL method can be efficiently trained and rapidly converges to the optimal solution using the designed target-guided reward function. A well-trained DRL model significantly reduces the computation time for the next movement to a millisecond level, which improves the time up to 100 times in our experiments compared to the enumerated solutions. Moreover, the trained DRL model can be easily deployed on lightweight devices in smart manufacturing, such as Internet of Things devices and mobile phones, which only require limited computational resources.
Problem

Research questions and friction points this paper is trying to address.

Solves real-time Dynamic Multiple Sources to Single Destination navigation for robots
Addresses inefficiency in learning from past robot trajectories in smart manufacturing
Reduces computation time for robot movement planning using DRL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight Deep Reinforcement Learning method
Target-guided reward function for rapid convergence
Deployable on IoT devices and mobile phones
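The target-guided reward function listed above is credited with rapid convergence, but its exact form is not given in this summary. A minimal sketch of one common form of goal-guided shaping, assuming a Euclidean workspace and hypothetical bonus/cost parameters, rewards each step by how much it reduces the distance to the destination:

```python
import math

def goal_guided_reward(prev_pos, pos, goal, arrival_bonus=10.0, step_cost=0.1):
    """Hypothetical goal-guided shaping reward (illustrative only,
    not the paper's published formula): positive when a step moves
    the robot closer to the single destination, with a small
    per-step cost and a bonus on arrival."""
    progress = math.dist(prev_pos, goal) - math.dist(pos, goal)
    reward = progress - step_cost
    if math.dist(pos, goal) < 1e-6:  # reached the destination
        reward += arrival_bonus
    return reward
```

Shaping of this kind gives the DRL agent a dense learning signal at every step instead of only at the destination, which is the usual reason such reward designs converge faster than sparse goal rewards.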
Ziren Xiao
PDRA, Loughborough University
reinforcement learning, computer network, intelligent vehicles, task allocation
Ruxin Xiao
School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou, 215123, China
Chang Liu
School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou, 215123, China
Xinheng Wang
Xi'an Jiaotong-Liverpool University
Intelligent and Connected Systems, Acoustic Localization, Communications and Sensing, Robotics