Scalable Multi-Agent Path Finding using Collision-Aware Dynamic Alert Mask and a Hybrid Execution Strategy

📅 2025-10-10
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Centralized multi-agent pathfinding (MAPF) methods suffer from prohibitive computational overhead, while decentralized approaches often yield low-quality solutions. Method: This paper proposes a hybrid framework that integrates decentralized planning with lightweight centralized coordination. Its core innovation is a dynamic, conflict-aware alert-masking mechanism: only critical inter-agent conflict information is exchanged locally, drastically reducing communication overhead, while a central coordinator generates static conflict grids or short-horizon conflict trajectories to provide lightweight global guidance for reinforcement learning-driven distributed path planning. Contribution/Results: The method produces collision-free, high-quality paths in high-density scenarios while maintaining scalability, safety, and computational efficiency. In large-scale environments, it significantly reduces both inter-agent information exchange and computational burden compared to centralized baselines such as Conflict-Based Search (CBS).

Technology Category

Application Category

๐Ÿ“ Abstract
Multi-agent pathfinding (MAPF) remains a critical problem in robotics and autonomous systems, where agents must navigate shared spaces efficiently while avoiding conflicts. Traditional centralized algorithms with global information, such as Conflict-Based Search (CBS), provide high-quality solutions but become computationally expensive in large-scale scenarios due to the combinatorial explosion of conflicts that need resolution. Conversely, distributed approaches with only local information, particularly learning-based methods, offer better scalability by operating under relaxed information availability, yet often at the cost of solution quality. To address these limitations, we propose a hybrid framework that combines decentralized path planning with a lightweight centralized coordinator. Our framework leverages reinforcement learning (RL) for decentralized planning, enabling agents to adapt their plans based on minimal, targeted alerts--such as static conflict-cell flags or brief conflict tracks--dynamically shared by the central coordinator for effective conflict resolution. We empirically study the effect of the information available to an agent on its planning performance. Our approach reduces inter-agent information sharing compared to fully centralized and fully distributed methods, while still consistently finding feasible, collision-free solutions--even in large-scale scenarios with higher agent counts.
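The coordinator's role described in the abstract, detecting near-term conflicts and alerting only the agents involved, can be sketched as below. This is a minimal illustration under assumed data structures (plans as lists of grid cells indexed by timestep, and a function name `conflict_alerts` chosen here for exposition), not the paper's implementation.

```python
from collections import defaultdict

def conflict_alerts(paths, horizon=3):
    """Compare agents' short-horizon plans and return, per agent, only
    the conflict entries involving that agent (the 'alert mask').

    paths: dict of agent_id -> list of (x, y) cells, index = timestep.
    Returns: dict of agent_id -> set of (t, cell) conflicts.
    """
    occupied = defaultdict(list)  # (t, cell) -> agents planning to be there
    for aid, path in paths.items():
        for t, cell in enumerate(path[:horizon]):
            occupied[(t, cell)].append(aid)

    alerts = defaultdict(set)
    for (t, cell), agents in occupied.items():
        if len(agents) > 1:  # vertex conflict within the planning horizon
            for aid in agents:
                alerts[aid].add((t, cell))
    return dict(alerts)
```

Agents whose plans never collide within the horizon receive no message at all, which is the source of the communication savings the abstract claims over broadcasting full plans.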
Problem

Research questions and friction points this paper is trying to address.

Addressing scalability limitations in multi-agent pathfinding algorithms
Reducing computational costs while maintaining collision-free path solutions
Balancing centralized coordination with decentralized planning efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid framework with decentralized planning and centralized coordinator
Reinforcement learning for adaptive path planning with minimal alerts
Dynamic alert masks and conflict tracks for collision resolution
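On the agent side, the innovations above suggest an RL observation that stacks a local field-of-view patch with per-timestep alert channels derived from the coordinator's conflict tracks. The sketch below is a plausible reading of that design; the function name, field-of-view size, and channel layout are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def build_observation(grid, pos, alerts, fov=5, horizon=3):
    """Stack a local obstacle patch with one alert channel per timestep.

    grid: 2-D array of 0/1 obstacle flags.
    pos: (x, y) agent position, assumed at least fov//2 from the border.
    alerts: set of (t, (x, y)) conflict entries for this agent.
    """
    r = fov // 2
    x, y = pos
    obs = np.zeros((1 + horizon, fov, fov), dtype=np.float32)
    obs[0] = grid[x - r:x + r + 1, y - r:y + r + 1]  # local obstacle view
    for t, (cx, cy) in alerts:
        dx, dy = cx - x + r, cy - y + r               # cell -> patch coords
        if t < horizon and 0 <= dx < fov and 0 <= dy < fov:
            obs[1 + t, dx, dy] = 1.0                  # flag conflict cell
    return obs
```

A policy network consuming this tensor sees conflicts only when the coordinator flags them, which is how minimal, targeted alerts can steer decentralized planning without sharing full global state.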