Following the TRAIL: Predicting and Explaining Tomorrow's Hits with a Fine-Tuned LLM

📅 2026-02-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Traditional recommender systems struggle to model user preferences accurately under sparse interactions, often lack trustworthy explanations, and incur high computational costs for full-catalog real-time ranking. To address these challenges, this work proposes TRAIL, a fine-tuned large language model (LLM) framework that unifies short-term item popularity prediction with natural-language explanation generation. TRAIL applies contrastive learning over carefully constructed positive-negative sample pairs to align structured trend signals with textual explanations, simultaneously improving prediction accuracy and producing coherent, evidence-grounded explanations. Extensive experiments show that TRAIL significantly outperforms strong baselines across multiple metrics, achieving both high accuracy and strong interpretability.

📝 Abstract
Large Language Models (LLMs) have been widely applied across multiple domains for their broad knowledge and strong reasoning capabilities. However, applying them to recommendation systems is challenging since it is hard for LLMs to extract user preferences from large, sparse user-item logs, and real-time per-user ranking over the full catalog is too time-consuming to be practical. Moreover, many existing recommender systems focus solely on ranking items while overlooking explanations, which could help improve predictive accuracy and make recommendations more convincing to users. Inspired by recent works that achieve strong recommendation performance by forecasting near-term item popularity, we propose TRAIL (TRend and explAnation Integrated Learner). TRAIL is a fine-tuned LLM that jointly predicts short-term item popularity and generates faithful natural-language explanations. It employs contrastive learning with positive and negative pairs to align its scores and explanations with structured trend signals, yielding accurate and explainable popularity predictions. Extensive experiments show that TRAIL outperforms strong baselines and produces coherent, well-grounded explanations.
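The abstract says TRAIL uses contrastive learning with positive and negative pairs to align its popularity scores and explanations with structured trend signals. The paper's exact objective is not given in this listing; a minimal sketch of one common formulation of such alignment, a symmetric InfoNCE-style loss over paired trend-signal and explanation embeddings (all names here are illustrative):

```python
import numpy as np

def info_nce_loss(trend_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss (illustrative sketch).

    trend_emb: (N, d) embeddings of structured trend signals
    text_emb:  (N, d) embeddings of the paired textual explanations
    Row i of each matrix forms a positive pair; all other rows in the
    batch serve as negatives.
    """
    # L2-normalize rows so dot products are cosine similarities
    t = trend_emb / np.linalg.norm(trend_emb, axis=1, keepdims=True)
    x = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = t @ x.T / temperature       # (N, N) similarity matrix
    labels = np.arange(len(t))           # positives lie on the diagonal

    def xent(l):
        # numerically stable cross-entropy toward the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the trend->text and text->trend directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each trend-signal embedding toward its matching explanation and pushes it away from the in-batch negatives, which is the general mechanism the abstract describes for keeping scores and explanations mutually consistent.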
Problem

Research questions and friction points this paper is trying to address.

recommendation systems
Large Language Models
explainability
short-term popularity prediction
user-item logs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Explainable Recommendation
Contrastive Learning
Popularity Prediction
Fine-tuned LLM