🤖 AI Summary
This work addresses the high per-step inference cost and limited generalization of traditional state-value-based approaches in planning tasks by proposing a supervised learning framework that directly trains a regularized Q-function. The Q-function is modeled with a graph neural network, and an action-discriminative regularization term sharpens the distinction between actions selected by the teacher policy and those not selected. The authors describe this as the first systematic effort in planning-by-learning to replace state-value functions with Q-functions. Evaluated across ten planning domains, the approach consistently outperforms existing state-value-based policies and matches the state-of-the-art planner LAMA-first, while substantially reducing per-step inference cost and improving policy robustness.
📝 Abstract
Learning per-domain generalizing policies is a key challenge in learning for planning. Standard approaches learn state-value functions represented as graph neural networks using supervised learning on optimal plans generated by a teacher planner. In this work, we advocate for learning Q-value functions instead. Such policies are drastically cheaper to evaluate for a given state, as they need to process only the current state rather than every successor. Surprisingly, vanilla supervised learning of Q-values performs poorly as it does not learn to distinguish between the actions taken and those not taken by the teacher. We address this by using regularization terms that enforce this distinction, resulting in Q-value policies that consistently outperform state-value policies across a range of 10 domains and are competitive with the planner LAMA-first.
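The action-discriminative regularization can be illustrated with a minimal sketch (hypothetical function names and loss form; the paper's actual loss and GNN architecture are not reproduced here). Treating Q-values as cost-to-go estimates, a supervised regression term fits the teacher's target for the chosen action, and a hinge-style margin term penalizes any non-teacher action whose Q-value is not at least a margin above the teacher action's:

```python
import numpy as np

def margin_regularizer(q_values, teacher_idx, margin=1.0):
    """Hinge penalty: every non-teacher action's Q (cost-to-go estimate)
    should exceed the teacher action's Q by at least `margin`."""
    q = np.asarray(q_values, dtype=float)
    q_teacher = q[teacher_idx]
    others = np.delete(q, teacher_idx)
    # Penalty is positive whenever an unchosen action looks too good,
    # i.e. its Q comes within `margin` of (or below) the teacher's Q.
    return float(np.sum(np.maximum(0.0, margin - (others - q_teacher))))

def regularized_loss(q_values, teacher_idx, target, lam=0.1, margin=1.0):
    """Squared-error fit to the teacher's Q-target for the chosen action,
    plus the action-discriminative margin term weighted by `lam`."""
    q = np.asarray(q_values, dtype=float)
    supervised = (q[teacher_idx] - target) ** 2
    return float(supervised + lam * margin_regularizer(q, teacher_idx, margin))
```

Without the margin term, plain regression on teacher Q-targets says nothing about unchosen actions, so their predicted values can drift below the teacher's and flip the greedy policy; the regularizer enforces the separation directly.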