AI Summary
Step-wise explanations of logic puzzles often have suboptimal quality because designing an effective multi-objective scoring function is difficult. To address this, we propose MACHOP, a framework that learns high-quality explanation steps via interactive preference learning from users' pairwise comparisons. To tackle instability arising from multi-scale sub-objectives and query redundancy, we introduce dynamic normalization and design an active querying strategy that jointly incorporates non-domination constraints and upper-confidence-bound (UCB)-based diversity. Our method models preference learning as a multi-armed bandit problem, enabling interpretable explanation generation for Sudoku and Logic-Grid puzzles. Extensive simulations and real-user studies demonstrate that MACHOP significantly improves explanation quality and comprehensibility over conventional heuristic and static learning approaches.
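Since MACHOP expands to "Multi-Armed CHOice Perceptron", the core preference update can be pictured as a perceptron step over the sub-objective features of two compared explanations. The sketch below is an illustrative assumption, not the paper's exact algorithm: the feature names, learning rate, and update rule are hypothetical, showing only the generic choice-perceptron idea of nudging a weight vector toward the explanation the user preferred.

```python
import numpy as np

def perceptron_update(w, feats_winner, feats_loser, lr=0.1):
    """One preference-based update: if the current weights rank the
    user-preferred explanation no higher than the rejected one, move
    the weight vector toward the winner's features (choice-perceptron
    style). Otherwise leave the weights unchanged."""
    if w @ feats_winner <= w @ feats_loser:
        w = w + lr * (feats_winner - feats_loser)
    return w

# Hypothetical 3-dimensional sub-objective features per explanation step,
# e.g. (number of constraints used, number of derived values, a difficulty proxy).
w = np.zeros(3)
winner = np.array([1.0, 2.0, 0.5])  # explanation the user preferred
loser = np.array([3.0, 1.0, 2.0])   # explanation the user rejected
w = perceptron_update(w, winner, loser)
```

Repeating such updates over many elicited comparisons gradually aligns the learned weights with the user's notion of a comprehensible step.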
Abstract
Step-wise explanations can explain logic puzzles and other satisfaction problems by showing how to derive decisions step by step. Each step consists of a set of constraints that derive an assignment to one or more decision variables. However, many candidate explanation steps exist, with different sets of constraints and different decisions they derive. To identify the most comprehensible one, a user-defined objective function is required to quantify the quality of each step. However, defining a good objective function is challenging. Here, interactive preference elicitation methods from the wider machine learning community offer a way to learn user preferences from pairwise comparisons. We investigate the feasibility of this approach for step-wise explanations and address several limitations that distinguish it from elicitation for standard combinatorial problems. First, because explanation quality is measured using multiple sub-objectives that can vary greatly in scale, we propose two dynamic normalization techniques to rescale these features and stabilize the learning process. We also observed that many generated comparisons involve similar explanations. For this reason, we introduce MACHOP (Multi-Armed CHOice Perceptron), a novel query generation strategy that integrates non-domination constraints with upper-confidence-bound (UCB)-based diversification. We evaluate the elicitation techniques on Sudoku and Logic-Grid puzzles using artificial users, and validate them with a real-user evaluation. In both settings, MACHOP consistently produces higher-quality explanations than the standard approach.
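The two ingredients the abstract names, rescaling multi-scale sub-objectives and UCB-based diversification, can be sketched generically as follows. This is a minimal sketch under stated assumptions: the min-max normalization scheme, the exploration constant `c`, and all function names are illustrative, not the paper's specific formulation.

```python
import numpy as np

def dynamic_normalize(features):
    """Min-max rescale each sub-objective column over the current pool of
    candidate explanation steps, so that features on very different scales
    (e.g. a constraint count vs. a cost in the hundreds) contribute
    comparably. One plausible form of dynamic normalization: the bounds
    are recomputed from the current pool rather than fixed in advance."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    return (features - lo) / span

def ucb_scores(mean_utility, query_counts, t, c=1.0):
    """Upper-confidence-bound score per candidate: estimated utility plus an
    exploration bonus that grows for rarely queried candidates, steering
    query generation toward more diverse comparisons."""
    bonus = c * np.sqrt(np.log(t) / np.maximum(query_counts, 1))
    return mean_utility + bonus

# Hypothetical pool of candidate steps with two sub-objectives on
# very different scales.
pool = np.array([[1.0, 300.0],
                 [3.0, 150.0],
                 [2.0, 900.0]])
norm = dynamic_normalize(pool)
```

With equal estimated utilities, the UCB bonus ranks a rarely queried candidate above a frequently queried one, which is the diversification effect the abstract describes.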