🤖 AI Summary
This paper addresses the task allocation problem for fatigue-prone workers, with the goal of maximizing system throughput and task success rate. To capture the dynamic energy depletion of workers, we model their state evolution as a continuous-time Markov chain (CTMC) and design an online scheduling policy based on real-time sampling of worker states. When tasks may be assigned only to workers in their most efficient state, the optimal policy is shown to be a threshold-based sampling policy with analytically derived thresholds; when moderately efficient workers may also receive tasks, the scheduling decision becomes a non-convex sum-of-ratios optimization problem, which we solve with a branch-and-bound algorithm that closely approximates the globally optimal solution at moderate computational cost. Experimental results demonstrate significant improvements in system efficiency and task completion rates. The framework provides both theoretical guarantees and practical applicability for real-time scheduling in dynamically resource-constrained environments.
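As a rough illustration of the CTMC worker model (the two-state dynamics, rate names, and function below are simplifying assumptions, not taken from the paper), a worker can be viewed as alternating between an "efficient" and an "exhausted" state with exponential holding times; its long-run availability is then `rate_recover / (rate_exhaust + rate_recover)`:

```python
import random

def simulate_worker(rate_exhaust, rate_recover, t_end, seed=None):
    """Two-state CTMC worker (hypothetical minimal model): state 1 is
    'efficient', state 0 is 'exhausted'. The worker leaves the efficient
    state at rate `rate_exhaust` and recovers at rate `rate_recover`;
    holding times in each state are exponentially distributed. Returns
    the fraction of time spent efficient on [0, t_end]."""
    rng = random.Random(seed)
    t, state, time_efficient = 0.0, 1, 0.0
    while t < t_end:
        rate = rate_exhaust if state == 1 else rate_recover
        dwell = min(rng.expovariate(rate), t_end - t)  # exponential holding time
        if state == 1:
            time_efficient += dwell
        t += dwell
        state = 1 - state  # jump to the other state
    return time_efficient / t_end

# Long-run availability approaches rate_recover / (rate_exhaust + rate_recover) = 2/3 here.
frac = simulate_worker(rate_exhaust=1.0, rate_recover=2.0, t_end=10_000.0, seed=1)
```

A richer model along the paper's lines would add intermediate, "moderately efficient" states between the two extremes; the simulation structure stays the same.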
📝 Abstract
We consider the problem of efficiently assigning tasks to a set of workers that can exhaust themselves as a result of processing tasks. Once a worker is exhausted, it takes longer to recover. To model the efficiency of workers subject to exhaustion, we use a continuous-time Markov chain (CTMC). By sampling the internal states of the workers, the source assigns tasks to workers that are found to be in their efficient states. We consider two settings: (i) the source can assign tasks to workers only when they are in their most efficient state, and (ii) it can also assign tasks to moderately efficient workers, at the cost of a potentially reduced success probability. In the former case, we show that the optimal policy is a threshold-based sampling policy, where the thresholds depend on the workers' recovery and exhaustion rates. In the latter case, we solve a non-convex sum-of-ratios problem using a branch-and-bound approach, which performs well compared with the globally optimal solution.
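To make the sum-of-ratios step concrete, here is a minimal depth-first branch-and-bound sketch for maximizing a sum of linear ratios over binary decision vectors. It is an illustrative stand-in, not the paper's algorithm: the instance, coefficient names, and bounding rule are assumptions, and nonnegative coefficients are assumed so each ratio can be bounded optimistically by maximizing its numerator and minimizing its denominator independently over the undecided variables.

```python
import itertools

def bnb_max_sum_of_ratios(A, B, b0):
    """Maximize f(x) = sum_j (A[j].x) / (b0[j] + B[j].x) over binary x.

    Illustrative depth-first branch-and-bound; assumes all entries of
    A and B are nonnegative and every b0[j] > 0, so each ratio admits
    a simple optimistic upper bound."""
    n, m = len(A[0]), len(A)
    best = [float("-inf"), None]  # [incumbent value, incumbent vector]

    def value(x):
        return sum(
            sum(a * xi for a, xi in zip(A[j], x))
            / (b0[j] + sum(b * xi for b, xi in zip(B[j], x)))
            for j in range(m)
        )

    def bound(prefix):
        # Optimistic bound for any completion of `prefix`: set the free
        # variables to 1 in each numerator and to 0 in each denominator.
        k, ub = len(prefix), 0.0
        for j in range(m):
            num = sum(a * xi for a, xi in zip(A[j][:k], prefix)) + sum(A[j][k:])
            den = b0[j] + sum(b * xi for b, xi in zip(B[j][:k], prefix))
            ub += num / den
        return ub

    def dfs(prefix):
        if len(prefix) == n:
            v = value(prefix)
            if v > best[0]:
                best[0], best[1] = v, list(prefix)
            return
        if bound(prefix) <= best[0]:
            return  # prune: no completion of this prefix can beat the incumbent
        for bit in (1, 0):
            dfs(prefix + [bit])

    dfs([])
    return best[0], best[1]

# Tiny instance; exhaustive enumeration confirms the branch-and-bound result.
A = [[2, 1, 0], [0, 3, 1]]
B = [[1, 0, 2], [2, 1, 0]]
b0 = [1.0, 1.0]
best_val, best_x = bnb_max_sum_of_ratios(A, B, b0)
brute = max(
    sum(sum(a * xi for a, xi in zip(A[j], x))
        / (b0[j] + sum(b * xi for b, xi in zip(B[j], x))) for j in range(2))
    for x in itertools.product([0, 1], repeat=3)
)
```

The pruning rule discards a partial assignment as soon as its optimistic bound cannot beat the best complete solution found so far, which is what lets branch-and-bound avoid enumerating all 2^n vectors on larger instances.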