Semi-Markov Decision Process Framework for Age of Incorrect Information Minimization

📅 2025-12-03
🤖 AI Summary
This paper addresses the joint optimization of Age of Incorrect Information (AoII) and transmission cost in remote estimation systems, considering push-based update channels with general discrete-time phase-type distributed delays and finite-state Markov sources. It proposes a multi-threshold update policy framework grounded in Semi-Markov Decision Processes (SMDPs). To accurately model AoII evolution under heterogeneous delays, the authors introduce dual-regime absorbing Markov chains (DR-AMCs) and the corresponding dual-regime discrete phase-type (DR-DPH) distributions for characterizing update absorption times. They derive the optimal multi-threshold structure and incorporate a feedback-driven mechanism for dynamic update control. Experimental results show that the strategy significantly reduces the weighted average AoII and transmission cost under general channel delays, outperforming conventional Age-of-Information approaches in semantic-level timeliness guarantees.

📝 Abstract
For a remote estimation system, we study the age of incorrect information (AoII), a recently proposed semantic-aware freshness metric. In particular, we assume an information source observing a discrete-time finite-state Markov chain (DTMC) and employing push-based transmissions of status update packets toward the monitor, which is tasked with remote estimation of the source. The source-to-monitor channel delay is assumed to have a general discrete-time phase-type (DPH) distribution, whereas the zero-delay reverse channel ensures that the source has perfect information on the AoII and the remote estimate. A multi-threshold transmission policy is employed, where a packet transmission is initiated when the AoII process exceeds a threshold that may be different for each estimation value. In this general setting, our goal is to minimize the weighted sum of the time average of an arbitrary function of the AoII and the estimate, and the transmission cost, by suitable choice of the thresholds. We formulate the problem as a semi-Markov decision process (SMDP) with the same state space as the original DTMC to obtain the optimum multi-threshold policy, where the parameters of the SMDP are obtained using a novel stochastic tool called the dual-regime absorbing Markov chain (DR-AMC) and its corresponding absorption time distribution, named the dual-regime DPH (DR-DPH).
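To make the policy concrete, the sketch below simulates the push-based multi-threshold rule in a deliberately simplified setting: a two-state symmetric source, a geometric channel delay (the simplest special case of a DPH distribution), and hand-picked thresholds. All parameter values here are illustrative assumptions, not taken from the paper, and the slot-level ordering of events is one reasonable choice among several.

```python
import random

random.seed(0)

p = 0.1              # source flip probability per slot (assumed)
tau = {0: 1, 1: 3}   # AoII threshold per current estimate value (assumed)
q = 0.5              # geometric delivery probability per slot (assumed)

source, estimate, aoii = 0, 0, 0
in_flight = None     # value currently being transmitted, or None
total_aoii, transmissions, T = 0, 0, 10_000

for _ in range(T):
    # Source evolves one step of the DTMC.
    if random.random() < p:
        source ^= 1
    # AoII grows while the estimate is wrong, resets when it is correct.
    aoii = aoii + 1 if source != estimate else 0
    # In-flight packet is delivered with probability q each slot.
    if in_flight is not None and random.random() < q:
        estimate = in_flight
        in_flight = None
    # Push-based rule: transmit once AoII reaches the state-dependent threshold.
    if in_flight is None and aoii >= tau[estimate]:
        in_flight = source
        transmissions += 1
    total_aoii += aoii

print(f"avg AoII = {total_aoii / T:.3f}, transmissions = {transmissions}")
```

Sweeping the thresholds in `tau` and weighting `total_aoii` against `transmissions` reproduces, in miniature, the cost trade-off that the SMDP formulation optimizes exactly.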
Problem

Research questions and friction points this paper is trying to address.

Minimizes Age of Incorrect Information for remote estimation systems
Optimizes multi-threshold transmission policy under channel delays
Formulates problem as Semi-Markov Decision Process for cost reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

SMDP framework for AoII minimization
Multi-threshold policy with state-dependent thresholds
DR-AMC tool for SMDP parameter derivation
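Since the DR-DPH is defined as an absorption-time distribution, it may help to recall how an ordinary discrete phase-type (DPH) random variable is generated: run an absorbing Markov chain over transient phases and count the steps until absorption. The sketch below does exactly that for an assumed two-phase example; the matrix and initial distribution are illustrative, not the paper's dual-regime construction.

```python
import random

random.seed(1)

# Illustrative DPH parameters (assumed): initial phase distribution alpha
# and sub-stochastic transient matrix S. The absorption probability from
# phase i is 1 - sum(S[i]).
alpha = [0.6, 0.4]
S = [[0.3, 0.4],
     [0.0, 0.5]]

def sample_dph(alpha, S):
    # Draw the starting phase from alpha.
    r, acc, phase = random.random(), 0.0, 0
    for i, a in enumerate(alpha):
        acc += a
        if r < acc:
            phase = i
            break
    steps = 1
    while True:
        r, acc, nxt = random.random(), 0.0, None
        for j, prob in enumerate(S[phase]):
            acc += prob
            if r < acc:
                nxt = j
                break
        if nxt is None:      # leftover mass -> absorbed at this step
            return steps
        phase = nxt
        steps += 1

samples = [sample_dph(alpha, S) for _ in range(20_000)]
print(f"mean delay over samples: {sum(samples) / len(samples):.2f} slots")
```

For these parameters the exact mean is alpha (I - S)^(-1) 1 ≈ 2.34 slots, which the empirical average approaches. The paper's DR-AMC extends this idea by letting the chain switch between two transition regimes before absorption, which is what yields the SMDP's sojourn-time and cost parameters.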
Ismail Cosandal
University of Maryland, College Park, MD, USA
S. Ulukus
University of Maryland, College Park, MD, USA
Nail Akar
Professor of Electrical and Electronics Eng. Dept., Bilkent University
Computer networks, performance evaluation, queuing theory, stochastic models, wireless and optical networks