AI Summary
This paper addresses the challenge of rapid and precise multi-source airborne pollutant plume tracking in turbulent environments, particularly for emergency industrial leak scenarios. To tackle partial observability, high measurement noise, and strong environmental dynamics, we formulate multi-agent reinforcement learning as a partially observable Markov game (POMG), the first such formulation for this problem, and propose an action-observation history-driven, LSTM-enhanced Action-specific Double Deep Recurrent Q-Network (ADDRQN) that enables coupled multi-source modeling and environment-adaptive decision-making. Leveraging a Gaussian plume model, we construct a realistic 3D simulation environment. Our method achieves accurate localization of multiple sources while exploring only 1.29% of the state space, substantially outperforming conventional gradient-based and bio-inspired approaches. The framework establishes a new paradigm for efficient, reliable, and real-time autonomous sensing in emergency response applications.
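The Gaussian plume environment mentioned above can be sketched as follows. This is a minimal illustration of the standard ground-reflected Gaussian plume equation, with multiple sources combined by superposition; the function name, emission parameters, and the power-law dispersion coefficients are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def gaussian_plume(x, y, z, Q=1.0, u=2.0, H=10.0, a=0.22, b=0.20):
    """Concentration of a single ground-reflected Gaussian plume.

    x: downwind distance (m, must be > 0), y: crosswind offset (m),
    z: height (m), Q: emission rate (g/s), u: wind speed (m/s),
    H: effective release height (m).
    sigma_y / sigma_z use a simple power-law fit (illustrative constants).
    """
    sy = a * x * (1 + 0.0001 * x) ** -0.5   # crosswind dispersion
    sz = b * x                               # vertical dispersion
    cross = np.exp(-y**2 / (2 * sy**2))
    # Reflected term (z + H) models bounce-back at the ground.
    vert = (np.exp(-(z - H)**2 / (2 * sz**2))
            + np.exp(-(z + H)**2 / (2 * sz**2)))
    return Q / (2 * np.pi * u * sy * sz) * cross * vert

# Multiple sources superpose linearly: sum each source's contribution
# at the sensor location (here two hypothetical sources).
sources = [dict(Q=1.0, H=10.0), dict(Q=0.5, H=15.0)]
total = sum(gaussian_plume(100.0, 0.0, 10.0, **s) for s in sources)
```

A simulated sUAS sensor reading would then be this superposed concentration plus noise (e.g. additive Gaussian), sampled at the agent's current 3D position.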
Abstract
Industrial catastrophes like the Bhopal disaster (1984) and the Aliso Canyon gas leak (2015) demonstrate the urgent need for rapid and reliable plume tracing algorithms to protect public health and the environment. Traditional methods, such as gradient-based or biologically inspired approaches, often fail in realistic, turbulent conditions. To address these challenges, we present a Multi-Agent Reinforcement Learning (MARL) algorithm designed for localizing multiple airborne pollution sources using a swarm of small uncrewed aerial systems (sUAS). Our method models the problem as a Partially Observable Markov Game (POMG), employing a Long Short-Term Memory (LSTM)-based Action-specific Double Deep Recurrent Q-Network (ADDRQN) that uses full sequences of historical action-observation pairs, effectively approximating latent states. Unlike prior work, we use a general-purpose simulation environment based on the Gaussian Plume Model (GPM), incorporating realistic elements such as a three-dimensional environment, sensor noise, multiple interacting agents, and multiple plume sources. The incorporation of action histories as part of the inputs further enhances the adaptability of our model in complex, partially observable environments. Extensive simulations show that our algorithm significantly outperforms conventional approaches. Specifically, our model allows agents to explore only 1.29% of the environment to successfully locate pollution sources.
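The core idea of conditioning Q-values on the full action-observation history, rather than the current observation alone, can be sketched with a minimal untrained LSTM Q-network. All layer sizes, weight initializations, and the class name are illustrative assumptions, not the paper's ADDRQN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class HistoryQNetSketch:
    """Sketch of a recurrent Q-network over (action, observation) pairs.

    Each timestep's input concatenates a one-hot previous action with the
    sensor observation; a single LSTM cell summarizes the history into a
    hidden state that approximates the latent environment state, and a
    linear head maps it to one Q-value per action.
    """
    def __init__(self, n_actions, obs_dim, hidden=32):
        self.n_actions = n_actions
        self.hidden = hidden
        in_dim = n_actions + obs_dim
        # One stacked weight matrix for the four LSTM gates (i, f, g, o).
        self.W = rng.standard_normal((4 * hidden, in_dim + hidden)) * 0.1
        self.b = np.zeros(4 * hidden)
        self.Wq = rng.standard_normal((n_actions, hidden)) * 0.1

    def q_values(self, history):
        """history: list of (action_index, observation_vector) pairs."""
        h = np.zeros(self.hidden)
        c = np.zeros(self.hidden)
        sig = lambda v: 1.0 / (1.0 + np.exp(-v))
        for a, obs in history:
            a_onehot = np.eye(self.n_actions)[a]
            x = np.concatenate([a_onehot, obs, h])
            i, f, g, o = np.split(self.W @ x + self.b, 4)
            c = sig(f) * c + sig(i) * np.tanh(g)   # LSTM cell update
            h = sig(o) * np.tanh(c)                # hidden state
        return self.Wq @ h                          # Q(history, a) per action
```

In a Double DQN-style training loop, each agent would pick `argmax` over these Q-values (epsilon-greedily), append the new (action, observation) pair to its history, and update the online network against a target network; those training components are omitted here.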