🤖 AI Summary
This work studies algorithmic feature selection in AI-assisted decision-making, jointly optimizing short-term predictive accuracy and long-term human learning. Motivated by how humans learn through repeated interactions with an algorithm, it formulates a "pedagogically aware" feature-recommendation problem, formalizing the trade-off between informativeness (teaching the human which features matter) and alignment with the human's current model (preserving immediate accuracy). Methodologically, the work combines time-discounted utility optimization, a characterization of human-algorithm model misalignment, and combinatorial feature-selection analysis, yielding an optimal policy with a stationary subset-sequence structure that is tractable to compute. The analysis shows how the algorithm's patience and the human's learning ability shape feature selection: as the algorithm becomes more patient or the human learns faster, it selects more informative features, improving both prediction accuracy and the human's understanding. Finally, because final decision authority remains with the human, errors in the algorithm's own knowledge are shown to have only a limited impact.
📝 Abstract
AI systems increasingly support human decision-making. In many cases, despite the algorithm's superior performance, the final decision remains in human hands. For example, an AI may assist doctors in determining which diagnostic tests to run, but the doctor ultimately makes the diagnosis. This paper studies such AI-assisted decision-making settings, where the human learns through repeated interactions with the algorithm. In our framework, the algorithm -- designed to maximize decision accuracy according to its own model -- determines which features the human can consider. The human then makes a prediction based on their own less accurate model. We observe that the discrepancy between the algorithm's model and the human's model creates a fundamental tradeoff. Should the algorithm prioritize recommending more informative features, encouraging the human to recognize their importance, even if this yields less accurate predictions in the short term until learning occurs? Or is it preferable to forgo educating the human and instead select features that align more closely with their existing understanding, minimizing the immediate cost of learning? This tradeoff is shaped by the algorithm's time-discounted objective and the human's learning ability. Our results show that optimal feature selection has a surprisingly clean combinatorial characterization, reducible to a stationary sequence of feature subsets that is tractable to compute. As the algorithm becomes more "patient" or the human's learning improves, the algorithm increasingly selects more informative features, enhancing both prediction accuracy and the human's understanding. Notably, early investment in learning leads to the selection of more informative features than a later investment. We complement our analysis by showing that the impact of errors in the algorithm's knowledge is limited, since the algorithm does not make the prediction directly.
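To make the patience tradeoff concrete, here is a minimal toy sketch, not the paper's model: it compares an "align" strategy (features matching the human's current understanding, flat accuracy) against a "teach" strategy (more informative features, initially worse but improving as the human learns) under a time-discounted objective. The horizon, learning rate, and accuracy levels are illustrative assumptions.

```python
# Toy illustration of the teach-vs-align tradeoff under time discounting.
# All parameter values are assumptions for illustration only.

def discounted_utility(per_round_accuracy, gamma):
    """Sum of gamma**t * accuracy_t over the interaction horizon."""
    return sum(gamma**t * acc for t, acc in enumerate(per_round_accuracy))

T = 20       # number of interaction rounds (assumed)
rate = 0.3   # human's per-round learning rate (assumed)

# "Align": features fit the human's existing model -> constant accuracy.
align = [0.70] * T

# "Teach": informative features -> accuracy starts at 0.55 and rises
# toward a 0.85 ceiling as the human learns their importance.
teach = [0.85 - (0.85 - 0.55) * (1 - rate) ** t for t in range(T)]

for gamma in (0.5, 0.95):
    u_align = discounted_utility(align, gamma)
    u_teach = discounted_utility(teach, gamma)
    winner = "teach" if u_teach > u_align else "align"
    print(f"gamma={gamma}: align={u_align:.2f}, teach={u_teach:.2f} -> {winner}")
```

With these numbers, an impatient algorithm (gamma = 0.5) prefers aligning with the human's current model, while a patient one (gamma = 0.95) prefers the more informative features, mirroring the qualitative effect described in the abstract.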