CASE: Cadence-Aware Set Encoding for Large-Scale Next Basket Repurchase Recommendation

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing next-basket repeat purchase recommendation methods struggle to explicitly model calendar time and item-specific repurchase rhythms, and often lack dynamic update capabilities. This work proposes a novel framework that decouples item-level repurchase rhythm learning from cross-item interactions, introducing calendar time signals explicitly into repurchase modeling for the first time. By integrating shared multi-scale temporal convolutions with an induced set attention mechanism of sub-quadratic complexity, the approach enhances both timeliness and accuracy of recommendations while maintaining industrial scalability. Extensive experiments demonstrate that the proposed method significantly outperforms strong baselines across three public benchmarks and a large-scale industrial dataset with tens of millions of users, achieving relative improvements of up to 8.6% in Precision and 9.9% in Recall.
📝 Abstract
Repurchase behavior is a primary signal in large-scale retail recommendation, particularly in categories with frequent replenishment: many items in a user's next basket were previously purchased and their timing follows stable, item-specific cadences. Yet most next basket repurchase recommendation models represent history as a sequence of discrete basket events indexed by visit order, which cannot explicitly model elapsed calendar time or update item rankings as days pass between purchases. We present CASE (Cadence-Aware Set Encoding for next basket repurchase recommendation), which decouples item-level cadence learning from cross-item interaction, enabling explicit calendar-time modeling while remaining production-scalable. CASE represents each item's purchase history as a calendar-time signal over a fixed horizon, applies shared multi-scale temporal convolutions to capture recurring rhythms, and uses induced set attention to model cross-item dependencies with sub-quadratic complexity, allowing efficient batch inference at scale. Across three public benchmarks and a proprietary dataset, CASE consistently improves Precision, Recall, and NDCG at multiple cutoffs compared to strong next basket prediction baselines. In a production-scale evaluation with tens of millions of users and a large item catalog, CASE achieves up to 8.6% relative Precision and 9.9% Recall lift at top-5, demonstrating that scalable cadence-aware modeling yields measurable gains in both benchmark and industrial settings.
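The pipeline the abstract describes (calendar-time signal per item, shared multi-scale temporal convolutions, then induced set attention over the item set) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: the horizon `T`, embedding width `D`, number of inducing points `M`, and kernel widths are assumptions, box filters stand in for the paper's learned shared convolutions, and the random projections stand in for trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 90   # calendar-time horizon in days (assumed)
D = 16   # embedding width (assumed)
M = 4    # number of inducing points (assumed)

def cadence_features(purchase_days, T, widths=(3, 7, 14)):
    """Binary calendar-time signal -> multi-scale temporal features."""
    signal = np.zeros(T)
    signal[np.asarray(purchase_days)] = 1.0
    # one box filter per scale ('same' padding): a crude stand-in
    # for the paper's learned, shared temporal convolutions
    feats = [np.convolve(signal, np.ones(w) / w, mode="same") for w in widths]
    return np.stack(feats, axis=0)  # (scales, T)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(Q, K, V):
    """Plain scaled dot-product attention."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores, axis=-1) @ V

def induced_set_attention(X, I):
    """ISAB-style block: n items attend through M inducing points,
    so cost is O(n*M) rather than O(n^2) in the set size."""
    H = attend(I, X, X)      # (M, D): inducing points summarize the set
    return attend(X, H, H)   # (n, D): items read the summary back

# toy basket: 3 items with different repurchase rhythms
histories = [[0, 7, 14, 21], [0, 30, 60], [5, 12, 19, 26, 33]]
feats = np.stack([cadence_features(h, T).mean(axis=1) for h in histories])
X = feats @ rng.normal(size=(feats.shape[1], D))  # project to D dims
I = rng.normal(size=(M, D))                       # learnable inducing points
out = induced_set_attention(X, I)
print(out.shape)  # (3, 16)
```

Because every item reuses the same convolution kernels and the attention cost scales with the number of inducing points rather than the square of the basket size, a sketch like this stays cheap enough for batch inference over a large catalog, which is the scalability property the abstract emphasizes.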
Problem

Research questions and friction points this paper is trying to address.

repurchase recommendation
next basket prediction
cadence modeling
calendar-time awareness
temporal dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

cadence-aware modeling
temporal convolution
set attention
next basket recommendation
scalable recommendation