🤖 AI Summary
Existing sequential recommendation methods typically train with cross-entropy loss, which emphasizes pointwise accuracy on the single ground-truth item while neglecting top-K utility and user satisfaction. To address this, we propose CPFT, a conformal prediction-driven, fine-grained confidence-aware fine-tuning framework. CPFT is the first to integrate conformal prediction into sequential recommendation fine-tuning, introducing a differentiable surrogate loss and a dynamic calibration mechanism that jointly optimize top-K precision and confidence calibration fidelity while preserving statistical coverage guarantees. The framework is model-agnostic and requires no architectural modifications to mainstream sequential recommenders. Extensive experiments across five real-world datasets and four baseline architectures demonstrate that CPFT consistently improves Precision@K (average gain +2.1%) and reduces Expected Calibration Error (average reduction −38.7%), thereby enhancing recommendation accuracy, reliability, and user satisfaction.
📝 Abstract
In Sequential Recommendation Systems (SRecsys), traditional training approaches that rely on Cross-Entropy (CE) loss often prioritize accuracy but align poorly with user-satisfaction metrics. CE loss maximizes the confidence assigned to the single ground-truth item, which is difficult to achieve universally across all users and sessions, and it overlooks the practical acceptability of ranking the ground-truth item within the top-$K$ positions, a common evaluation criterion in SRecsys. To address this limitation, we propose **CPFT**, a novel fine-tuning framework that integrates Conformal Prediction (CP)-based losses with CE loss to optimize accuracy alongside a notion of confidence that better aligns with widely used top-$K$ metrics. CPFT embeds CP principles into the training loop using differentiable proxy losses and computationally efficient calibration strategies, enabling the generation of high-confidence prediction sets. These sets focus on items with high relevance while maintaining robust coverage guarantees. Extensive experiments on five real-world datasets and four distinct sequential models demonstrate that CPFT improves precision metrics and confidence calibration. Our results highlight the importance of confidence-aware fine-tuning in delivering accurate, trustworthy recommendations that enhance user satisfaction.
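To make the CP mechanism referenced above concrete, here is a minimal sketch of standard split conformal prediction applied to a recommender's per-item probabilities: calibrate a nonconformity threshold on held-out interactions, then form a prediction set of items covered at level $1-\alpha$. This illustrates only the generic CP principle the abstract builds on, not the paper's actual CPFT losses (which use differentiable proxies inside training); all function names and the $1 - p(\text{item})$ score choice are illustrative assumptions.

```python
import math

def conformal_quantile(cal_scores, alpha):
    """(1 - alpha) empirical quantile of calibration nonconformity scores,
    with the standard finite-sample correction ceil((n + 1) * (1 - alpha))."""
    n = len(cal_scores)
    rank = min(n, math.ceil((n + 1) * (1 - alpha)))
    return sorted(cal_scores)[rank - 1]

def prediction_set(item_probs, qhat):
    """Items whose nonconformity score (here 1 - p) falls under the threshold.
    Under exchangeability, the set contains the true next item with
    probability at least 1 - alpha on average."""
    return {item for item, p in item_probs.items() if 1.0 - p <= qhat}

# Calibration scores: 1 - p(ground-truth item) on a held-out split (toy values).
cal = [0.2, 0.5, 0.8, 0.35]
qhat = conformal_quantile(cal, alpha=0.4)
covered = prediction_set({"a": 0.6, "b": 0.4, "c": 0.1}, qhat)
```

In CPFT, by the abstract's description, this hard set-membership test is replaced with a differentiable surrogate so the set size and coverage behavior can be optimized jointly with CE loss during fine-tuning.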