Hybrid Belief Reinforcement Learning for Efficient Coordinated Spatial Exploration

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing spatial pattern learning and efficient trajectory planning in multi-agent collaborative exploration and service tasks within unknown environments, where existing approaches often suffer from limited sample efficiency or poor policy adaptability. The authors propose a hybrid belief-based reinforcement learning framework that first constructs a shared spatial belief using a Log-Gaussian Cox process and employs a Pathwise Mutual Information planner to generate information-driven exploration trajectories. Subsequently, control is transferred to Soft Actor-Critic agents, initialized via a novel dual-channel knowledge transfer mechanism for policy warm-starting. By innovatively integrating Bayesian spatial modeling with deep reinforcement learning—and incorporating a variance-normalized overlap penalty—the method achieves a 10.8% higher cumulative reward and 38% faster convergence than baseline approaches in multi-UAV wireless service tasks, with ablation studies confirming the superiority of dual-channel over single-channel knowledge transfer.
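The summary's variance-normalized overlap penalty can be made concrete with a small sketch. This is an illustrative reconstruction, not the paper's implementation: the function name, the inverse-variance weighting, and the `weight`/`eps` parameters are assumptions. The idea is that cells covered by more than one agent incur a cost scaled by the inverse of the belief variance, so overlap stays cheap in high-uncertainty regions (where cooperative sensing helps) and expensive in well-explored ones.

```python
import numpy as np

def overlap_penalty(coverage_counts, belief_var, eps=1e-6, weight=1.0):
    """Hypothetical variance-normalized overlap penalty.

    coverage_counts: per-cell number of agents currently covering the cell
    belief_var: per-cell posterior variance of the shared spatial belief
    Cells covered by more than one agent are penalized in proportion to
    1 / variance: redundant coverage is discouraged where the belief is
    already confident, but tolerated where uncertainty remains high.
    """
    overlap = np.maximum(coverage_counts - 1, 0)  # extra agents per cell
    return -weight * np.sum(overlap / (belief_var + eps))
```

With this shape, the penalty vanishes whenever no cell is doubly covered, and shrinks toward zero for overlaps in high-variance cells.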

📝 Abstract
Coordinating multiple autonomous agents to explore and serve spatially heterogeneous demand requires jointly learning unknown spatial patterns and planning trajectories that maximize task performance. Pure model-based approaches provide structured uncertainty estimates but lack adaptive policy learning, while deep reinforcement learning often suffers from poor sample efficiency when spatial priors are absent. This paper presents a hybrid belief-reinforcement learning (HBRL) framework to address this gap. In the first phase, agents construct spatial beliefs using a Log-Gaussian Cox Process (LGCP) and execute information-driven trajectories guided by a Pathwise Mutual Information (PathMI) planner with multi-step lookahead. In the second phase, trajectory control is transferred to a Soft Actor-Critic (SAC) agent, warm-started through dual-channel knowledge transfer: belief state initialization supplies spatial uncertainty, and replay buffer seeding provides demonstration trajectories generated during LGCP exploration. A variance-normalized overlap penalty enables coordinated coverage through a shared belief state, permitting cooperative sensing in high-uncertainty regions while discouraging redundant coverage in well-explored areas. The framework is evaluated on a multi-UAV wireless service provisioning task. Results show 10.8% higher cumulative reward and 38% faster convergence over baselines, with ablation studies confirming that dual-channel transfer outperforms either channel alone.
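The two-channel warm start described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the `ReplayBuffer` class, the `warm_start` helper, and the choice to concatenate the posterior mean and variance into the initial observation are all hypothetical.

```python
import numpy as np
from collections import deque

class ReplayBuffer:
    """Minimal FIFO replay buffer for (s, a, r, s', done) transitions."""
    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)

    def add(self, transition):
        self.buf.append(transition)

    def __len__(self):
        return len(self.buf)

def warm_start(buffer, lgcp_demos, belief_mean, belief_var):
    """Hypothetical dual-channel warm start for the SAC phase.

    Channel 1 (replay seeding): transitions collected while the PathMI
    planner drove exploration are inserted before SAC training begins.
    Channel 2 (belief initialization): the SAC observation is augmented
    with the LGCP posterior mean and variance, so the policy starts
    from the spatial uncertainty accumulated in phase one.
    """
    for transition in lgcp_demos:          # channel 1: demonstration replay
        buffer.add(transition)
    initial_obs = np.concatenate(          # channel 2: belief-augmented state
        [belief_mean.ravel(), belief_var.ravel()]
    )
    return initial_obs
```

The ablation result in the abstract suggests the two channels are complementary: seeding alone accelerates early value estimation, while belief augmentation alone informs the policy's state, and the combination outperforms either.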
Problem

Research questions and friction points this paper is trying to address.

spatial exploration
multi-agent coordination
sample efficiency
spatial heterogeneity
autonomous agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Belief-Reinforcement Learning
Log-Gaussian Cox Process
Pathwise Mutual Information
Dual-channel Knowledge Transfer
Coordinated Spatial Exploration