Policy Gradient Algorithms for Age-of-Information Cost Minimization

📅 2025-12-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses the data-freshness optimization problem in IoT networks under unknown transmission delays and general age-cost functions. Method: the authors propose two model-free policy gradient reinforcement learning algorithms, one REINFORCE-based and one Actor-Critic-based, that jointly minimize the time-average Age-of-Information (AoI) and the communication cost. A novel dual-policy online co-updating mechanism is designed for continuous state-action spaces; it requires no prior knowledge of the system and generalizes well. Both algorithms operate under the generate-at-will information generation model. Contribution/Results: both algorithms converge stably and achieve time-average costs within 3% of the optimum, when the optimum is computable. Compared with state-of-the-art methods, they support a broader range of scenarios, achieve a lower total cost, and reduce computational overhead by at least one order of magnitude.
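The objective being minimized can be made concrete with a small simulation. The sketch below estimates the time-average cost (integrated age penalty plus a per-transmission cost) of a fixed-wait generate-at-will policy; the exponential delay distribution, linear age penalty, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import random

def time_average_cost(wait, c_tx=1.0, mean_delay=0.5, n_cycles=10_000, seed=0):
    """Estimate the time-average cost of a fixed-wait generate-at-will policy.

    After each delivery the sender waits `wait` seconds, then generates a
    fresh sample that arrives after a random transmission delay.  The cost
    integrates a linear age penalty over time plus a cost c_tx per
    transmission.  Exponential delays and a linear age cost are assumed
    here purely for illustration.
    """
    rng = random.Random(seed)
    total_cost, total_time = 0.0, 0.0
    age_at_delivery = mean_delay  # age of the current sample when it arrived
    for _ in range(n_cycles):
        delay = rng.expovariate(1.0 / mean_delay)  # delay of the next update
        cycle = wait + delay                       # time until next delivery
        # integral of the linearly growing age over this cycle
        age_area = age_at_delivery * cycle + 0.5 * cycle * cycle
        total_cost += age_area + c_tx
        total_time += cycle
        age_at_delivery = delay                    # new sample's age on arrival
    return total_cost / total_time
```

Under these illustrative parameters the cost is minimized at an intermediate waiting time (around `wait ≈ 1`): transmitting as fast as possible pays too much transmission cost, while waiting too long lets the age grow. This trade-off is what the proposed algorithms learn online without knowing the delay distribution.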

📝 Abstract
Recent developments in cyber-physical systems have increased the importance of maximizing the freshness of the information about the physical environment. However, optimizing the access policies of Internet of Things devices to maximize the data freshness, measured as a function of the Age-of-Information (AoI) metric, is a challenging task. This work introduces two algorithms to optimize the information update process in cyber-physical systems operating under the generate-at-will model, by finding an online policy without knowing the characteristics of the transmission delay or the age cost function. The optimization seeks to minimize the time-average cost, which integrates the AoI at the receiver and the data transmission cost, making the approach suitable for a broad range of scenarios. Both algorithms employ policy gradient methods within the framework of model-free reinforcement learning (RL) and are specifically designed to handle continuous state and action spaces. Each algorithm minimizes the cost using a distinct strategy for deciding when to send an information update. Moreover, we demonstrate that it is feasible to apply the two strategies simultaneously, leading to an additional reduction in cost. The results demonstrate that the proposed algorithms exhibit good convergence properties and achieve a time-average cost within 3% of the optimal value, when the latter is computable. A comparison with other state-of-the-art methods shows that the proposed algorithms outperform them in one or more of the following aspects: being applicable to a broader range of scenarios, achieving a lower time-average cost, and requiring a computational cost at least one order of magnitude lower.
Problem

Research questions and friction points this paper is trying to address.

Optimizing IoT device access policies to maximize data freshness via Age-of-Information.
Minimizing time-average cost integrating AoI and transmission costs without prior knowledge.
Developing model-free RL algorithms for continuous spaces that reduce both computational overhead and total cost.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Policy gradient reinforcement learning for AoI minimization
Model-free algorithms for continuous state-action spaces
Simultaneous dual-strategy approach reduces cost further
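The policy-gradient idea behind the REINFORCE-based strategy can be sketched in a few lines: a stochastic policy over the waiting time is updated along the score-function gradient of the observed reward. The single-parameter Gaussian policy, exponential delays, linear age penalty, and learning rates below are all assumptions for illustration; the paper's actual algorithms use richer continuous state-action policies.

```python
import random

def reinforce_wait_policy(episodes=3000, lr=0.01, sigma=0.3,
                          c_tx=1.0, mean_delay=0.5, seed=0):
    """Learn a waiting-time policy with REINFORCE (a minimal sketch).

    The policy is a Gaussian over the wait before generating the next
    update; the per-cycle reward is the negative of (age integral +
    transmission cost) per unit time.  Model-free: only sampled delays
    are observed, never the delay distribution itself.
    """
    rng = random.Random(seed)
    mu = 0.1          # policy mean (waiting time), the learned parameter
    baseline = 0.0    # running reward baseline for variance reduction
    prev_delay = mean_delay
    for _ in range(episodes):
        wait = max(0.0, rng.gauss(mu, sigma))      # sample an action
        delay = rng.expovariate(1.0 / mean_delay)  # observed delay
        cycle = wait + delay
        age_area = prev_delay * cycle + 0.5 * cycle * cycle
        reward = -(age_area + c_tx) / cycle        # negative per-cycle cost rate
        baseline += 0.05 * (reward - baseline)
        # Gaussian score function: d log pi / d mu = (a - mu) / sigma^2
        grad_log_pi = (wait - mu) / (sigma ** 2)
        mu += lr * (reward - baseline) * grad_log_pi
        mu = max(0.0, mu)                          # waiting times are nonnegative
        prev_delay = delay
    return mu
```

An Actor-Critic variant would replace the running baseline with a learned value estimate of the state (here, the previous delay), trading the extra critic update for lower-variance gradients.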