🤖 AI Summary
This study addresses the real-time monitoring of an N-state Markov source over a wireless channel under a sampling constraint, aiming to minimize the Age of Incorrect Information (AoII) to improve state-estimation accuracy and reduce the rate of erroneous actions. The work analyzes semantics- and threshold-aware randomized sampling policies and introduces, for the first time, a Cost of Actions under Uncertainty (CoAU) function. It further develops a randomized stationary actuation policy that maximizes the probability of taking no incorrect action. By leveraging Markov modeling, constrained optimization, and closed-form analysis, the authors derive analytical expressions for both the average AoII and the probability of error-free actuation. The proposed approach significantly improves estimation precision and actuation reliability while adhering to stringent sampling constraints.
📝 Abstract
This paper studies efficient data management and timely information dissemination for real-time monitoring of an $N$-state Markov process, enabling accurate state estimation and reliable actuation decisions. First, we analyze the Age of Incorrect Information (AoII) and derive closed-form expressions for its time average under several scheduling policies, including randomized stationary, change-aware randomized stationary, semantics-aware randomized stationary, and threshold-aware randomized stationary policies. We then formulate and solve constrained optimization problems to minimize the average AoII under a time-averaged sampling action constraint, and compare the resulting optimal sampling and transmission policies to identify the conditions under which each policy is most effective. We further show that directly using reconstructed states for actuation can degrade system performance, especially when the receiver is uncertain about the state estimate or when actuation is costly. To address this issue, we introduce a cost function, termed the Cost of Actions under Uncertainty (CoAU), which determines when the actuator should act, so that correct actions are taken and incorrect ones avoided whenever the receiver is uncertain about the reconstructed source state. We propose a randomized actuation policy and derive a closed-form expression for the probability of taking no incorrect action. Finally, we formulate an optimization problem to find the optimal randomized actuation policy that maximizes this probability. The results show that the resulting policy substantially reduces incorrect actuator actions.
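The monitoring setup in the abstract can be illustrated with a minimal Monte Carlo sketch. This is not the paper's model or analysis: it assumes a symmetric $N$-state source, an error-free channel, and only the simplest of the listed policies (randomized stationary, i.e., sample with a fixed probability each slot); the function and parameter names are hypothetical. AoII here grows by one each slot the receiver's estimate differs from the source and resets to zero on a match.

```python
import random

def simulate_aoii(num_states=4, p_sample=0.3, p_stay=0.7,
                  horizon=100_000, seed=0):
    """Monte Carlo estimate of time-average AoII for a symmetric
    N-state Markov source under a randomized stationary sampling
    policy (illustrative sketch, not the paper's exact model).

    Each slot: the source stays put w.p. p_stay, otherwise jumps
    uniformly to another state; the transmitter samples and delivers
    the state w.p. p_sample over an assumed error-free channel; the
    receiver keeps the last delivered sample as its estimate.
    """
    rng = random.Random(seed)
    source, estimate, aoii, total = 0, 0, 0, 0
    for _ in range(horizon):
        # Source evolution.
        if rng.random() > p_stay:
            source = rng.choice(
                [s for s in range(num_states) if s != source])
        # Randomized stationary policy: sample w.p. p_sample,
        # independent of the source/estimate mismatch.
        if rng.random() < p_sample:
            estimate = source
        # AoII update: reset on match, otherwise age by one slot.
        aoii = 0 if estimate == source else aoii + 1
        total += aoii
    return total / horizon
```

As expected, sampling every slot drives the average AoII to zero, and lowering the sampling rate (a tighter time-averaged sampling constraint) increases it; the semantics- and threshold-aware policies in the paper improve on this baseline by concentrating samples on slots where the estimate is (or is likely to be) wrong.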