Energy-Efficient Routing Protocol in Vehicular Opportunistic Networks: A Dynamic Cluster-based Routing Using Deep Reinforcement Learning

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In vehicular opportunistic networks, highly dynamic topologies, unpredictable contacts, and resource constraints lead to low transmission reliability, high end-to-end delay, and short node lifetimes. To address these challenges, this paper proposes a deep reinforcement learning (DRL)-based dynamic clustering routing method. By integrating an Actor-Critic framework with domain-inspired heuristic functions, the approach enables adaptive overlapping-cluster reconfiguration and real-time optimal relay selection, balancing connectivity enhancement against energy load balancing. Built on the Store-Carry-Forward paradigm, it effectively suppresses redundant forwarding. Experimental results demonstrate a 10% improvement in delivery ratio, a 28.5% reduction in end-to-end delay, a 7% increase in throughput, and a 30% decrease in data transmission hops. Furthermore, node lifetime is extended by 21%, overall energy consumption is reduced by 17%, and node active time is increased by 15%.

📝 Abstract
Opportunistic Networks (OppNets) employ the Store-Carry-Forward (SCF) paradigm to maintain communication during intermittent connectivity. However, routing performance suffers due to dynamic topology changes, unpredictable contact patterns, and resource constraints including limited energy and buffer capacity. These challenges compromise delivery reliability, increase latency, and reduce node longevity in highly dynamic environments. This paper proposes Cluster-based Routing using Deep Reinforcement Learning (CR-DRL), an adaptive routing approach that integrates an Actor-Critic learning framework with a heuristic function. CR-DRL enables real-time optimal relay selection and dynamic cluster overlap adjustment to maintain connectivity while minimizing redundant transmissions and enhancing routing efficiency. Simulation results demonstrate significant improvements over state-of-the-art baselines. CR-DRL extends node lifetimes by up to 21%, overall energy use is reduced by 17%, and nodes remain active for 15% longer. Communication performance also improves, with up to 10% higher delivery ratio, 28.5% lower delay, 7% higher throughput, and data requiring 30% fewer transmission steps across the network.
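The Store-Carry-Forward paradigm the abstract builds on can be illustrated with a minimal sketch: a node stores a message, carries it while disconnected, and hands it off only when a contact with a suitable relay occurs. The class, method names, and buffer policy below are illustrative assumptions, not the paper's actual implementation.

```python
class SCFNode:
    """Toy Store-Carry-Forward node (illustrative sketch, not CR-DRL itself)."""

    def __init__(self, node_id, buffer_size=10):
        self.node_id = node_id
        self.buffer = []                 # messages currently carried
        self.buffer_size = buffer_size   # limited buffer capacity, as in OppNets

    def store(self, msg):
        """Accept a message into the buffer; drop it if the buffer is full."""
        if len(self.buffer) < self.buffer_size:
            self.buffer.append(msg)
            return True
        return False

    def on_contact(self, other, should_forward):
        """On an opportunistic contact, hand off messages the routing policy
        approves (single-copy forwarding suppresses redundant transmissions);
        keep carrying the rest until a better contact arises."""
        kept = []
        for msg in self.buffer:
            if should_forward(msg, other) and other.store(msg):
                continue  # handed off to the relay
            kept.append(msg)
        self.buffer = kept
```

In CR-DRL the `should_forward` decision would be made by the learned Actor-Critic policy; here any callable taking the message and the contacted node stands in for it.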
Problem

Research questions and friction points this paper is trying to address.

Dynamic topology changes and resource constraints degrade routing performance
Unpredictable contact patterns compromise delivery reliability and increase latency
Limited energy and buffer capacity reduce node longevity in highly dynamic environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic cluster-based routing using deep reinforcement learning
Actor-Critic framework with heuristic function integration
Real-time relay selection and cluster overlap adjustment
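The abstract does not give the model's exact formulation, but the core idea of an Actor-Critic policy augmented with a domain heuristic can be sketched as follows. All feature names (`energy`, `contact_freq`), weights, and the update rule are hypothetical stand-ins for the paper's design.

```python
import math

def heuristic(relay):
    """Hypothetical domain heuristic: favor relays with high residual energy
    and frequent contacts, echoing the paper's energy/connectivity balance."""
    return 0.6 * relay["energy"] + 0.4 * relay["contact_freq"]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class ActorCriticRelaySelector:
    """Toy Actor-Critic relay selector (illustrative, not the paper's model).
    The actor scores each candidate relay from learnable feature weights plus
    a fixed heuristic bonus; the critic is a scalar baseline updated by a
    TD(0)-style rule, used as the advantage in the policy-gradient step."""

    def __init__(self, alpha=0.1, beta=0.05, heuristic_weight=0.5):
        self.w_energy = 0.0    # actor weight: residual energy
        self.w_contact = 0.0   # actor weight: contact frequency
        self.baseline = 0.0    # critic's value estimate
        self.alpha = alpha     # actor learning rate
        self.beta = beta       # critic learning rate
        self.hw = heuristic_weight

    def score(self, relay):
        learned = (self.w_energy * relay["energy"]
                   + self.w_contact * relay["contact_freq"])
        return learned + self.hw * heuristic(relay)

    def select(self, candidates):
        """Return the greedy relay index and the softmax policy over candidates."""
        probs = softmax([self.score(r) for r in candidates])
        best = max(range(len(candidates)), key=lambda k: probs[k])
        return best, probs

    def update(self, candidates, chosen, reward):
        """REINFORCE-with-baseline step after observing a delivery reward."""
        advantage = reward - self.baseline
        self.baseline += self.beta * advantage
        _, probs = self.select(candidates)
        for k, r in enumerate(candidates):
            grad = (1.0 if k == chosen else 0.0) - probs[k]
            self.w_energy += self.alpha * advantage * grad * r["energy"]
            self.w_contact += self.alpha * advantage * grad * r["contact_freq"]
```

With two candidates where one dominates on both features, the heuristic term alone already steers the untrained greedy policy toward it; rewards then shift the actor weights in the same direction.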
Meisam Sharifi Sani
School of Electrical, Computer and Telecommunication Engineering, University of Wollongong, Wollongong, NSW 2522, Australia
Saeid Iranmanesh
School of Electrical, Computer and Telecommunication Engineering, University of Wollongong, Wollongong, NSW 2522, Australia
Raad Raad
University of Wollongong
Faisel Tubbal
School of Electrical, Computer and Telecommunication Engineering, University of Wollongong, Wollongong, NSW 2522, Australia