🤖 AI Summary
This work addresses the challenge of balancing prediction accuracy against update frequency in task completion time announcements, where frequent revisions erode stakeholder trust and incur replanning costs. It formalizes this problem for the first time as a partially observable Markov decision process (POMDP) and exploits a mixed observability MDP (MOMDP) structure for computational tractability. A multi-objective reward function jointly penalizes prediction error and update instability; coupled with the evolution of the belief state, it yields an adaptive, optimized announcement policy. Experimental results demonstrate that the proposed method maintains or improves prediction accuracy while reducing unnecessary updates by up to 75%, significantly enhancing the stability of announced completion times.
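The multi-objective reward described above can be sketched as a per-step penalty on announcement error plus a fixed cost for each update. This is a minimal illustration; the weights `c_err` and `c_upd` are illustrative assumptions, not values taken from the paper.

```python
def announcement_reward(announced: float, true_completion: float,
                        updated: bool,
                        c_err: float = 1.0, c_upd: float = 0.5) -> float:
    """Per-step reward trading off announcement error against update cost.

    c_err and c_upd are hypothetical weights; the paper's actual reward
    parameters are not specified here.
    """
    # Penalize distance between the announced and true completion time,
    # plus a fixed cost whenever the announcement was changed this step.
    return -c_err * abs(announced - true_completion) - c_upd * float(updated)
```

With this shape, a policy that updates rarely pays fewer `c_upd` penalties but risks a larger accumulated error term, which is exactly the trade-off the reward is meant to expose to the solver.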
📝 Abstract
Managing announced task completion times is a fundamental control problem in project management. While extensive research exists on estimating task durations and on task scheduling, the problem of when and how to update the completion times communicated to stakeholders remains understudied. Organizations must balance announcement accuracy against the costs of frequent timeline updates, which can erode stakeholder trust and trigger costly replanning. Despite the prevalence of this problem, current approaches rely on static predictions or ad-hoc policies that fail to account for the sequential nature of announcement management. In this paper, we formulate the task announcement problem as a Partially Observable Markov Decision Process (POMDP) in which the control policy decides when to update announced completion times based on noisy observations of the true completion time. Since most state variables (the current time and previous announcements) are fully observable, we leverage the Mixed Observability MDP (MOMDP) framework to enable more efficient policy optimization. Our reward structure captures the dual costs of announcement errors and update frequency, enabling synthesis of optimal announcement control policies. Using off-the-shelf solvers, we generate policies that act as feedback controllers, adaptively managing announcements as the belief state evolves. Simulation results demonstrate significant improvements in both accuracy and announcement stability compared to baseline strategies, achieving up to a 75% reduction in unnecessary updates while maintaining or improving prediction accuracy.
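The belief-driven feedback loop the abstract describes can be sketched as follows: a discrete Bayesian belief over the completion time is updated from noisy observations, and a simple threshold policy re-announces only when the belief mean drifts far enough from the current announcement. Everything here (the Gaussian observation model, the uniform prior, the weights `c_err`/`c_upd`, and the `threshold` policy) is an illustrative assumption standing in for the optimized MOMDP policy, not the paper's actual solver output.

```python
import numpy as np

def run_episode(true_T: int = 20, obs_noise: float = 2.0,
                threshold: float = 2.0, c_err: float = 1.0,
                c_upd: float = 0.5, seed: int = 0):
    """Simulate one task with a threshold re-announcement policy.

    Returns (number of updates, accumulated reward). All parameters
    are hypothetical; they illustrate the belief/reward mechanics only.
    """
    rng = np.random.default_rng(seed)
    # Discrete belief over candidate completion times 1..40, uniform prior.
    support = np.arange(1, 41)
    belief = np.ones(len(support)) / len(support)
    announced = float(belief @ support)  # initial announcement = prior mean
    updates, total_reward = 0, 0.0
    for _ in range(true_T):
        # Noisy observation of the (hidden) true completion time.
        obs = true_T + rng.normal(0.0, obs_noise)
        # Bayesian belief update under a Gaussian observation model.
        likelihood = np.exp(-0.5 * ((support - obs) / obs_noise) ** 2)
        belief = belief * likelihood
        belief /= belief.sum()
        mean = float(belief @ support)
        # Policy: re-announce only on sufficiently large belief drift.
        updated = abs(mean - announced) > threshold
        if updated:
            announced = mean
            updates += 1
        # Reward penalizes announcement error and each update.
        total_reward += -c_err * abs(announced - true_T) - c_upd * float(updated)
    return updates, total_reward
```

Sweeping `threshold` reproduces the accuracy/stability trade-off: a zero threshold updates nearly every step, while a larger one suppresses churn at a small cost in tracking error, which is the behavior an optimized MOMDP policy would balance automatically.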