🤖 AI Summary
This work addresses the problem of identifying unintentionally omitted items during supermarket shopping and providing interpretable recommendations, a problem previously unaddressed in Next Basket Prediction (NBP) research. We formally define the novel task of "Forgotten Item Prediction" (FIP), bridging critical gaps in omission detection and decision transparency within sequential retail recommendation. To this end, we propose two plug-and-play, unsupervised, interpretable algorithms that require no labeled forgotten-item annotations. Our methods combine shopping-sequence pattern mining with rule-based explanation generation to deliver both accurate identification of forgotten items and human-understandable, logic-driven justifications. Extensive experiments on real-world retail datasets show that our approach outperforms the best NBP baselines by 10–15% across multiple metrics, including precision, recall, and explanation fidelity, improving both predictive accuracy and recommendation trustworthiness.
📝 Abstract
Accurately identifying items forgotten during a supermarket visit and providing clear, interpretable explanations for recommending them remains an underexplored problem within the Next Basket Prediction (NBP) domain. Existing NBP approaches typically focus only on forecasting future purchases, without explicitly addressing the detection of unintentionally omitted items. This gap is partly due to the scarcity of real-world datasets that allow for the reliable estimation of forgotten items. Furthermore, most current NBP methods rely on black-box models, which lack transparency and limit the ability to justify recommendations to end users. In this paper, we formally introduce the forgotten item prediction task and propose two novel interpretable-by-design algorithms. These methods are tailored to identify forgotten items while offering intuitive, human-understandable explanations. Experiments on a real-world retail dataset show our algorithms outperform state-of-the-art NBP baselines by 10–15% across multiple evaluation metrics.