Reinforcement Learning-based Task Offloading in the Internet of Wearable Things

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the tension between resource constraints (limited battery life and computational capability) of wearable IoT devices and the demands of compute-intensive, low-latency applications, this paper proposes a reinforcement learning–based intelligent task offloading framework. It formulates offloading decisions as a Markov decision process and employs model-free Q-learning to enable adaptive, online decision-making in dynamic edge environments without requiring prior knowledge. A joint energy-delay optimization platform is implemented in ns-3, incorporating a dual-objective reward function that balances system energy efficiency and task timeliness. Experimental results demonstrate that the proposed approach reduces average device energy consumption by 23.6% and task completion time by 18.4% compared to baseline strategies, significantly improving offloading efficiency and user experience. Its model-free, lightweight design ensures practical deployability on resource-constrained wearable endpoints.
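The summary describes offloading as a Markov decision process solved with tabular Q-learning under a reward that jointly penalizes energy and delay. The following is a minimal sketch of that idea; the state space, cost model, weights, and all numeric constants are illustrative assumptions, not the paper's actual formulation or values.

```python
import random

# Hyperparameters (illustrative): learning rate, discount, exploration rate,
# and the weights of the dual-objective energy-delay reward.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
W_ENERGY, W_DELAY = 0.5, 0.5

def cost(state, action):
    """Hypothetical cost model: state = (channel_quality 0..2, task_size 1..3),
    action 0 = execute locally, 1 = offload to a nearby edge device.
    Offloading gets cheaper as channel quality improves; local execution
    cost grows with task size."""
    channel, size = state
    if action == 1:  # offload: transmission dominates
        energy = size * (3 - channel) * 0.5
        delay = size * (3 - channel) * 0.4
    else:            # local: computation dominates
        energy = size * 1.0
        delay = size * 0.8
    return W_ENERGY * energy + W_DELAY * delay

def greedy(q, state):
    """Pick the action with the highest learned Q-value."""
    return max((0, 1), key=lambda a: q.get((state, a), 0.0))

def train(episodes=20000, seed=0):
    """Model-free Q-learning: no prior knowledge of the cost model is used
    beyond the scalar reward observed after each decision."""
    rng = random.Random(seed)
    q = {}  # Q-table: (state, action) -> value
    state = (rng.randint(0, 2), rng.randint(1, 3))
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if rng.random() < EPSILON:
            action = rng.randint(0, 1)
        else:
            action = greedy(q, state)
        # Reward = negative weighted energy-delay cost (dual objective).
        reward = -cost(state, action)
        # Next task arrives with random channel quality and size.
        next_state = (rng.randint(0, 2), rng.randint(1, 3))
        best_next = max(q.get((next_state, a), 0.0) for a in (0, 1))
        key = (state, action)
        q[key] = q.get(key, 0.0) + ALPHA * (reward + GAMMA * best_next - q.get(key, 0.0))
        state = next_state
    return q

q = train()
# Learned policy: offload large tasks on a good channel, run them locally
# on a poor channel (under this toy cost model).
print(greedy(q, (2, 3)), greedy(q, (0, 3)))
```

Under these assumed costs the agent learns to offload when the channel is good and fall back to local execution when transmission would be expensive, which mirrors the adaptive, online behavior the summary attributes to the framework.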

📝 Abstract
Over the years, significant contributions have been made by the research and industrial sectors to improve wearable devices towards the Internet of Wearable Things (IoWT) paradigm. However, wearables are still facing several challenges. Many stem from the limited battery power and insufficient computation resources available on wearable devices. On the other hand, with the popularity of smart wearables, there is a consistent increase in the development of new computationally intensive and latency-critical applications. In such a context, task offloading allows wearables to leverage the resources available on nearby edge devices to enhance the overall user experience. This paper proposes a framework for Reinforcement Learning (RL)-based task offloading in the IoWT. We formulate the task offloading process considering the tradeoff between energy consumption and task accomplishment time. Moreover, we model the task offloading problem as a Markov Decision Process (MDP) and utilize the Q-learning technique to enable the wearable device to make optimal task offloading decisions without prior knowledge. We evaluate the performance of the proposed framework through extensive simulations for various applications and system configurations conducted in the ns-3 network simulator. We also show how varying the main system parameters of the Q-learning algorithm affects the overall performance in terms of average task accomplishment time, average energy consumption, and percentage of tasks offloaded.
Problem

Research questions and friction points this paper is trying to address.

Optimizing task offloading for wearable devices with limited resources
Balancing energy consumption and task completion time tradeoffs
Enabling intelligent offloading decisions using reinforcement learning techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning optimizes wearable task offloading
Models offloading as a Markov Decision Process solved with Q-learning
Balances energy consumption and task accomplishment time