Tracking and Assigning Jobs to a Markov Machine

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates dynamic task assignment in a time-slotted communication system with a two-state (busy/free) Markovian machine, jointly minimizing job-dropping penalties and the cost of inaccurate state information. The system comprises a user job queue, a cloud server, and a sampler; the server must decide whether to dispatch jobs based on possibly stale state samples. The authors use the age of incorrect information to quantify the sampler's cost, formulate the problem as a Markov decision process (MDP), and prove the optimality of a threshold policy without truncating the state space, providing a necessary and sufficient condition for such a policy to be optimal. They derive the optimal threshold in closed form, characterizing how the system parameters shape the policy structure. These results yield implementable policies for low-overhead, state-aware resource scheduling.

📝 Abstract
We consider a time-slotted communication system with a machine, a cloud server, and a sampler. Job requests from the users are queued on the server to be completed by the machine. The machine has two states, namely, a busy state and a free state. The server can assign a job to the machine in a first-come-first-served manner. If the machine is free, it completes the job request from the server; otherwise, it drops the request. Upon dropping a job request, the server is penalized. When the machine is in the free state, the machine can get into the busy state with an internal job. When the server does not assign a job request to the machine, the state of the machine evolves as a symmetric Markov chain. If the machine successfully accepts the job request from the server, the state of the machine goes to the busy state and follows different dynamics compared to when the machine goes to the busy state due to an internal job. The sampler samples the state of the machine and sends it to the server via an error-free channel. Thus, the server can estimate the state of the machine upon receiving an update from the sampler. If the machine is in the free state but the estimated state at the server is busy, the sampler pays a cost. We incorporate the concept of the age of incorrect information to model the cost of the sampler. We aim to find an optimal sampling policy such that the cost of the sampler plus the penalty on the machine is minimized. We formulate this problem in a Markov decision process framework and find how an optimal policy changes with several associated parameters. We show that a threshold policy is optimal for this problem. We show a necessary and sufficient condition for a threshold policy to be optimal. Finally, we find the optimal threshold without bounding the state space.
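The system model above can be sketched as a small Monte Carlo simulation. This is a simplified illustration, not the paper's exact model: it uses a single flip probability `p` for all state transitions (the paper distinguishes internal-job dynamics from server-job dynamics), assumes the server does not directly observe whether an assignment was accepted, and counts AoII only in the costly mismatch slots (machine free, estimate busy). The function name and parameters are hypothetical.

```python
import random

def simulate(threshold, p=0.3, horizon=10_000, drop_penalty=5.0, seed=0):
    """Simulate a threshold sampling policy on the two-state machine.

    Machine state: 0 = free, 1 = busy. The server assigns a queued job
    whenever its estimate says the machine is free; a job sent to a busy
    machine is dropped and incurs `drop_penalty`. AoII counts consecutive
    slots in which the machine is free but the estimate is busy, and the
    sampler refreshes the estimate once AoII reaches `threshold`.
    Returns the time-averaged cost (AoII cost plus drop penalties).
    """
    random.seed(seed)
    state, estimate = 0, 0
    aoii, total_cost = 0, 0.0
    for _ in range(horizon):
        # Server assigns a job only if it believes the machine is free.
        if estimate == 0:
            if state == 1:          # machine actually busy: job dropped
                total_cost += drop_penalty
            else:
                state = 1           # job accepted; machine becomes busy
        # AoII grows only during the costly mismatch (free machine, busy estimate).
        if state == 0 and estimate == 1:
            aoii += 1
        else:
            aoii = 0
        total_cost += aoii
        # Threshold policy: sample and synchronize the estimate.
        if aoii >= threshold:
            estimate = state
            aoii = 0
        # Simplified symmetric dynamics: flip state with probability p.
        if random.random() < p:
            state = 1 - state
    return total_cost / horizon
```

Sweeping `threshold` over a small range and comparing the returned average costs is a quick way to see the trade-off the paper formalizes: sampling too rarely inflates AoII cost, while the drop penalty punishes acting on stale estimates.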
Problem

Research questions and friction points this paper is trying to address.

Optimize job assignment to Markov machine
Minimize sampler cost and server penalty
Determine optimal sampling policy threshold
Innovation

Methods, ideas, or system contributions that make the work stand out.

Markov decision process framework
Optimal threshold policy
Age of incorrect information
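The MDP framing and the threshold structure listed above can be checked numerically with discounted value iteration on a reduced toy model. This is a hedged sketch, not the paper's MDP: the state is collapsed to the AoII value alone, waiting in state `a` is assumed to cost `a` and increment the AoII, and sampling is assumed to cost a fixed `sample_cost` and reset the AoII; the state space is truncated at `max_aoii` for computation, whereas the paper derives the threshold without bounding the state space.

```python
def threshold_via_vi(sample_cost=1.0, gamma=0.95, max_aoii=50, iters=500):
    """Locate the optimal sampling threshold by discounted value iteration.

    Toy MDP: state a = current AoII. Action "wait" pays a and moves
    a -> min(a+1, max_aoii); action "sample" pays sample_cost and resets
    a -> 0. Returns the smallest AoII at which sampling is (weakly)
    preferred, i.e. the threshold of the optimal policy.
    """
    V = [0.0] * (max_aoii + 1)
    for _ in range(iters):
        V = [min(a + gamma * V[min(a + 1, max_aoii)],   # wait
                 sample_cost + gamma * V[0])            # sample and reset
             for a in range(max_aoii + 1)]
    for a in range(max_aoii + 1):
        if sample_cost + gamma * V[0] <= a + gamma * V[min(a + 1, max_aoii)]:
            return a
    return max_aoii
```

Because the one-step waiting cost grows with the AoII while the sampling cost is flat, the value function is nondecreasing in `a` and the greedy policy switches from waiting to sampling exactly once, which is the threshold structure the paper proves for its full model.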
Subhankar Banerjee
UNIVERSITY OF MARYLAND, COLLEGE PARK
Age of Information · Wireless Communication · Information Theory
S. Ulukus
Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742