🤖 AI Summary
This paper addresses the joint minimization of a weighted Age of Incorrect Information (AoII) cost and transmission cost in push-based remote estimation systems with discrete-time Markov sources whose estimator may fall out of sync. The authors propose an adaptive threshold policy in which updates are triggered based on the estimated state. The key contribution is the construction of a "dual-regime absorbing Markov chain" (DR-AMC) and its associated "dual-regime phase-type" (DR-PH) distribution, which enable exact analytical characterization of the expected value of arbitrary AoII cost functions. Embedding this model within a semi-Markov decision process (SMDP) framework, the authors derive an optimal multi-threshold policy structure. Numerical experiments show that the proposed policy matches policies obtained by exhaustive search and outperforms various benchmark policies under general AoII cost functions, while retaining analytical tractability.
📝 Abstract
The age of incorrect information (AoII) process, which tracks the time elapsed since the source and monitor processes were last in sync, has been extensively used in remote estimation problems. In this paper, we consider a push-based remote estimation system with a discrete-time Markov chain (DTMC) information source transmitting status update packets towards the monitor once the AoII process exceeds a certain estimation-based threshold. The time average of an arbitrary function of AoII is taken as the AoII cost, as opposed to using the average AoII as the mismatch metric, and this function is also allowed to depend on the estimation value. In this very general setting, our goal is to minimize a weighted sum of AoII and transmission costs. For this purpose, we formulate a discrete-time semi-Markov decision process (SMDP) for the multi-threshold status update policy. We propose a novel tool in discrete time, called the 'dual-regime absorbing Markov chain' (DR-AMC), and its corresponding absorption time distribution, named the 'dual-regime phase-type' (DR-PH) distribution, to obtain the characterizing parameters of the SMDP. This allows us to obtain the distribution of the AoII process for a given policy, and hence the average of any function of AoII. The proposed method is validated with numerical results in which we compare it against policies obtained by exhaustive search, as well as various benchmark policies.
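To make the setup concrete, the following is a minimal Monte-Carlo sketch of the AoII process under a single-threshold push policy for a two-state symmetric DTMC source. All parameter names (`p`, `threshold`, `lam`, the cost function `f`) and the error-free one-slot delivery channel are illustrative assumptions, not the paper's exact model or its analytical DR-AMC/DR-PH machinery.

```python
import random

def simulate_aoii(p=0.1, threshold=2, lam=1.0, f=lambda a: a,
                  T=100_000, seed=0):
    """Estimate the weighted sum of average AoII cost f(AoII) and
    transmission rate for a two-state symmetric DTMC source under a
    push-based single-threshold policy (illustrative sketch only)."""
    rng = random.Random(seed)
    source, estimate, aoii = 0, 0, 0
    cost, transmissions = 0.0, 0
    for _ in range(T):
        # Source flips state with probability p in each slot.
        if rng.random() < p:
            source ^= 1
        # AoII: time elapsed since source and estimate were last in sync.
        aoii = 0 if source == estimate else aoii + 1
        # Push-based policy: transmit once AoII exceeds the threshold;
        # assume error-free delivery within the same slot.
        if aoii > threshold:
            estimate = source
            aoii = 0
            transmissions += 1
        cost += f(aoii)
    # Weighted sum of time-average AoII cost and transmission cost.
    return cost / T + lam * transmissions / T
```

For instance, with `threshold=0` the monitor is updated as soon as a mismatch occurs, so the AoII cost term vanishes and only the transmission cost remains; raising the threshold trades transmission cost for a higher AoII cost, which is the trade-off the multi-threshold SMDP policy optimizes.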