Minimizing Functions of Age of Incorrect Information for Remote Estimation

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the joint minimization of a weighted Age of Incorrect Information (AoII) metric and transmission cost in push-based remote estimation systems with a discrete-time Markov source whose estimator may become desynchronized. It proposes an adaptive threshold policy in which updates are triggered based on the estimated state. The key contribution is the construction of a “dual-regime absorbing Markov chain” (DR-AMC) and its associated “dual-regime phase-type” (DR-PH) distribution, which enable an exact analytical characterization of the expected value of arbitrary AoII cost functions. Embedding this model within a semi-Markov decision process (SMDP) framework, the authors derive an optimal multi-threshold policy structure. Numerical experiments show that the proposed policy matches policies found by exhaustive search and outperforms benchmark policies under general AoII cost functions, while remaining analytically tractable.

📝 Abstract
The age of incorrect information (AoII) process, which tracks the time elapsed since the source and monitor processes were last in sync, has been extensively used in remote estimation problems. In this paper, we consider a push-based remote estimation system with a discrete-time Markov chain (DTMC) information source that transmits status update packets towards the monitor once the AoII process exceeds a certain estimation-based threshold. The time average of an arbitrary function of AoII is taken as the AoII cost, as opposed to using the average AoII as the mismatch metric, and this function is also allowed to depend on the estimation value. In this very general setting, our goal is to minimize a weighted sum of AoII and transmission costs. For this purpose, we formulate a discrete-time semi-Markov decision process (SMDP) over the multi-threshold status update policies. We propose a novel tool in discrete time called the 'dual-regime absorbing Markov chain' (DR-AMC) and its corresponding absorption time distribution, named the 'dual-regime phase-type' (DR-PH) distribution, to obtain the characterizing parameters of the SMDP. This allows us to obtain the distribution of the AoII process for a given policy, and hence the average of any function of AoII. The proposed method is validated with numerical results, by which we compare it against policies obtained by exhaustive search as well as various benchmark policies.
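To make the threshold mechanism concrete, the following is a minimal Monte Carlo sketch of AoII under a single fixed-threshold push policy for a symmetric two-state DTMC source. This is an illustrative simplification, not the paper's method: it assumes a single threshold (not the estimation-dependent multi-threshold policy), instantaneous error-free delivery, and a linear AoII cost.

```python
import random

def simulate_aoii(p=0.1, threshold=3, horizon=100_000, seed=1):
    """Sketch: AoII under a fixed-threshold push policy.

    Source: symmetric two-state DTMC that flips state w.p. p each slot.
    Estimator: holds the last received sample; a transmission is
    triggered whenever AoII reaches the threshold and is assumed to be
    delivered instantly and error-free (a simplifying assumption).
    Returns (time-average AoII, transmission rate per slot).
    """
    rng = random.Random(seed)
    source, estimate = 0, 0
    aoii, aoii_sum, tx_count = 0, 0, 0
    for _ in range(horizon):
        if rng.random() < p:           # source flips its state
            source ^= 1
        aoii = 0 if source == estimate else aoii + 1
        if aoii >= threshold:          # push an update to the monitor
            estimate = source
            aoii = 0
            tx_count += 1
        aoii_sum += aoii
    return aoii_sum / horizon, tx_count / horizon
```

Sweeping the threshold exposes the trade-off the paper optimizes: a higher threshold lowers the transmission rate but raises the time-average AoII, so a weighted sum of the two costs has an interior optimum.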
Problem

Research questions and friction points this paper is trying to address.

Minimize weighted sum of AoII and transmission costs
Analyze AoII cost via arbitrary functions and estimation values
Develop DR-AMC tool for discrete-time SMDP optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Push-based system with DTMC source
Multi-threshold SMDP policy optimization
Novel DR-AMC tool for AoII analysis
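The DR-AMC/DR-PH machinery builds on the classical discrete phase-type distribution: the absorption time T of an absorbing Markov chain with transient-to-transient transition matrix S and initial distribution α has pmf P(T = n) = α S^(n−1) s, where s = (I − S)1 collects the exit probabilities into the absorbing state. A minimal NumPy sketch of this standard single-regime building block (the paper's dual-regime construction is a generalization not shown here):

```python
import numpy as np

def dph_pmf(alpha, S, n_max):
    """pmf of a discrete phase-type distribution for n = 1..n_max:
    P(T = n) = alpha @ S^(n-1) @ s, with s = (I - S) @ 1 the vector of
    one-step absorption probabilities from each transient state."""
    alpha = np.asarray(alpha, dtype=float)
    S = np.asarray(S, dtype=float)
    s = (np.eye(S.shape[0]) - S) @ np.ones(S.shape[0])
    pmf, vec = [], alpha.copy()
    for _ in range(n_max):
        pmf.append(vec @ s)   # probability of absorbing at this step
        vec = vec @ S         # advance one slot among transient states
    return np.array(pmf)
```

For a single phase with self-loop probability 0.7, this reduces to a geometric distribution with success probability 0.3, which is a quick sanity check of the recursion.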
Ismail Cosandal
University of Maryland, College Park, MD, USA
S. Ulukus
University of Maryland, College Park, MD, USA
Nail Akar
Professor of Electrical and Electronics Eng. Dept., Bilkent University
Computer networks, performance evaluation, queuing theory, stochastic models, wireless and optical networks