🤖 AI Summary
In operations research, reinforcement learning (RL) often faces deployment barriers due to low human trust. To address this, we propose an expert-knowledge embedding method based on action masking, which dynamically encodes domain-specific heuristic rules as constraints on admissible actions—thereby jointly enhancing policy interpretability and robustness. We provide the first systematic analysis of action masking under constrained exploration, formally characterizing its sufficient conditions for improving convergence speed while preserving optimality. Within the PPO and SAC frameworks, we evaluate our approach across three heterogeneous operational tasks—job scheduling, load management, and inventory control. Results show >40% faster policy convergence versus unmasked baselines and significantly improved managerial trust and adoption in real-world deployments. Our core contribution is a principled, interpretable, and tunable action-masking paradigm that achieves synergistic optimization of RL reliability and performance.
📝 Abstract
Reinforcement learning (RL) provides a powerful method to address problems in operations research. However, its real-world application often fails due to a lack of user acceptance and trust. A possible remedy is to provide managers with the possibility of altering the RL policy by incorporating human expert knowledge. In this study, we analyze the benefits and caveats of including human knowledge via action masking. While action masking has so far been used to exclude invalid actions, its ability to integrate human expertise remains underexplored. Human knowledge is often encapsulated in heuristics, which suggest reasonable, near-optimal actions in certain situations. Enforcing such actions should hence increase trust among the human workforce to rely on the model's decisions. Yet, a strict enforcement of heuristic actions may also restrict the policy from exploring superior actions, thereby leading to overall lower performance. We analyze the effects of action masking based on three problems with different characteristics, namely, paint shop scheduling, peak load management, and inventory management. Our findings demonstrate that incorporating human knowledge through action masking can achieve substantial improvements over policies trained without action masking. In addition, we find that action masking is crucial for learning effective policies in constrained action spaces, where certain actions can only be performed a limited number of times. Finally, we highlight the potential for suboptimal outcomes when action masks are overly restrictive.