🤖 AI Summary
This work addresses the decentralized heterogeneous resource allocation problem in multi-agent systems. We propose LGTC-IPPO, a decentralized reinforcement learning framework featuring a novel dynamic clustering consensus mechanism that enables agents to autonomously form teams and jointly optimize decisions. The framework integrates distributed consensus algorithms, dynamic graph neural network–based clustering, and multi-timescale modeling of resource states, thereby facilitating local-consensus-driven resource reallocation—reducing reliance on global information and enabling, for the first time, real-time rescheduling of discharge-type decaying resources. Experiments across diverse scales and resource distributions demonstrate a 37% improvement in reward stability, a 2.1× increase in coordination efficiency over the IPPO baseline, and robust scalability to hundreds of agents.
📝 Abstract
This paper addresses the challenge of allocating heterogeneous resources among multiple agents in a decentralized manner. Our proposed method, LGTC-IPPO, builds upon Independent Proximal Policy Optimization (IPPO) by integrating dynamic cluster consensus, a mechanism that allows agents to form and adapt local sub-teams based on resource demands. This decentralized coordination strategy reduces reliance on global information and enhances scalability. We evaluate LGTC-IPPO against standard multi-agent reinforcement learning baselines and a centralized expert solution across a range of team sizes and resource distributions. Experimental results demonstrate that LGTC-IPPO achieves more stable rewards, better coordination, and robust performance even as the number of agents or resource types increases. Additionally, we illustrate how dynamic clustering enables agents to reallocate resources efficiently also for scenarios with discharging resources.