Adaptive Vision-Based Coverage Optimization in Mobile Wireless Sensor Networks: A Multi-Agent Deep Reinforcement Learning Approach

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of conventional dynamic coverage optimization in mobile wireless sensor networks (MWSNs)—namely, its reliance on predefined strategies and inability to adapt autonomously to node failures and energy depletion—this paper proposes a vision-guided multi-agent deep reinforcement learning (MARL) framework. The method innovatively integrates LED-state recognition with a lightweight visual coverage assessment module, enabling autonomous collaborative localization and real-time reconfiguration without prior deployment knowledge. It supports in-situ, dynamic spatial redistribution of sensors, significantly enhancing environmental responsiveness and system robustness. Experimental results demonstrate that, compared to traditional static deployment approaches, the proposed method improves coverage efficiency by 26.5%, reduces energy consumption by 32%, decreases coverage redundancy by 22%, and extends network lifetime by 45%.

📝 Abstract
Traditional Wireless Sensor Networks (WSNs) typically rely on pre-analysis of the target area, network size, and sensor coverage to determine initial deployment. This often results in significant overlap to ensure continued network operation despite sensor energy depletion. With the emergence of Mobile Wireless Sensor Networks (MWSNs), issues such as sensor failure and static coverage limitations can be more effectively addressed through mobility. This paper proposes a novel deployment strategy in which mobile sensors autonomously position themselves to maximize area coverage, eliminating the need for predefined policies. A live camera system, combined with deep reinforcement learning (DRL), monitors the network by detecting sensor LED indicators and evaluating real-time coverage. Rewards based on coverage efficiency and sensor movement are computed at each learning step and shared across the network through a Multi-Agent Reinforcement Learning (MARL) framework, enabling decentralized, cooperative sensor control. Key contributions include a vision-based, low-cost coverage evaluation method; a scalable MARL-DRL framework for autonomous deployment; and a self-reconfigurable system that adjusts sensor positioning in response to energy depletion. Compared to traditional distance-based localization, the proposed method achieves a 26.5% improvement in coverage, a 32% reduction in energy consumption, and a 22% decrease in redundancy, extending network lifetime by 45%. This approach significantly enhances adaptability, energy efficiency, and robustness in MWSNs, offering a practical deployment solution within the IoT framework.
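The abstract describes evaluating real-time coverage from camera observations of sensor LED indicators. Once sensor positions have been extracted from the image, the reported coverage and redundancy figures can be computed by discretizing the area into grid points and counting how many sensing disks contain each point. The sketch below is an illustrative reconstruction under that disk-sensing assumption, not the paper's actual implementation; the function name, sensing radius, and grid step are hypothetical.

```python
import math

def coverage_stats(sensors, radius, width, height, step=1.0):
    """Discretize the area into grid points and count, for each point,
    how many sensors' sensing disks contain it.  Returns the covered
    fraction (points with >= 1 hit) and the redundancy fraction
    (points with >= 2 hits, i.e. overlapping coverage)."""
    covered, redundant, total = 0, 0, 0
    y = 0.0
    while y < height:
        x = 0.0
        while x < width:
            hits = sum(1 for (sx, sy) in sensors
                       if math.hypot(x - sx, y - sy) <= radius)
            total += 1
            if hits >= 1:
                covered += 1
            if hits >= 2:
                redundant += 1
            x += step
        y += step
    return covered / total, redundant / total

# Two sensors with overlapping disks on a 10 x 10 area, sensing radius 3.
cov, red = coverage_stats([(3, 5), (7, 5)], radius=3.0, width=10, height=10)
```

In this toy configuration the two disks overlap, so the redundancy fraction is nonzero; minimizing that quantity while maximizing the covered fraction is exactly the trade-off the 22% redundancy reduction refers to.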
Problem

Research questions and friction points this paper is trying to address.

Optimizing mobile sensor deployment for maximum area coverage autonomously
Reducing energy consumption and redundancy in wireless sensor networks
Enabling decentralized cooperative control through multi-agent reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-based LED detection for real-time coverage evaluation
Multi-agent deep reinforcement learning for decentralized sensor control
Autonomous self-reconfiguration to optimize coverage and energy efficiency
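The bullets above can be illustrated with a minimal decentralized decision step: each agent evaluates a handful of candidate moves and picks the one with the highest shared reward, here sketched as coverage gain minus a movement-cost penalty. The reward weights, step size, and greedy one-step policy are illustrative assumptions; the paper trains this behavior with deep RL rather than enumerating moves.

```python
import math

# Hypothetical reward weights: the paper's actual reward shaping is not
# reproduced here, so ALPHA (coverage gain) and BETA (movement cost per
# unit distance) are illustrative.
ALPHA, BETA = 1.0, 0.01

def covered_fraction(sensors, radius, pts):
    """Fraction of sample points inside at least one sensing disk."""
    hits = sum(1 for (px, py) in pts
               if any(math.hypot(px - sx, py - sy) <= radius
                      for sx, sy in sensors))
    return hits / len(pts)

def best_move(idx, sensors, radius, pts, step=1.0):
    """Greedy one-step policy for agent `idx`: try staying put or moving
    one step in each of four directions, and return the position with
    the highest reward (coverage gain minus movement cost)."""
    base = covered_fraction(sensors, radius, pts)
    best, best_r = sensors[idx], 0.0          # staying put scores 0
    for dx, dy in [(step, 0), (-step, 0), (0, step), (0, -step)]:
        cand = list(sensors)
        cand[idx] = (sensors[idx][0] + dx, sensors[idx][1] + dy)
        gain = covered_fraction(cand, radius, pts) - base
        r = ALPHA * gain - BETA * math.hypot(dx, dy)
        if r > best_r:
            best, best_r = cand[idx], r
    return best, best_r

# Two sensors stacked on the same spot: fully redundant coverage, so any
# move of the second sensor strictly increases the covered area.
pts = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]
sensors = [(2.0, 2.0), (2.0, 2.0)]
new_pos, r = best_move(1, sensors, radius=3.0, pts=pts)
```

Because the second sensor starts fully redundant, the greedy step moves it away and earns a positive reward, which is the self-reconfiguration behavior the MARL framework learns end to end.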
Parham Soltani, School of Electrical Engineering, Iran University of Science & Technology, Tehran, Iran
Mehrshad Eskandarpour, School of Electrical Engineering, Iran University of Science & Technology, Tehran, Iran
Sina Heidari, School of Electrical Engineering, Iran University of Science & Technology, Tehran, Iran
Farnaz Alizadeh, School of Electrical Engineering, Iran University of Science & Technology, Tehran, Iran
Hossein Soleimani, Assistant Professor, Iran University of Science and Technology
Cellular networks · 5G · LTE · Sensor Networks · Deep learning