Not Just What, But When: Integrating Irregular Intervals to LLM for Sequential Recommendation

📅 2025-07-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sequential recommendation methods overlook the dynamic nature of irregular time intervals between user interactions, hindering accurate modeling of individual behavioral patterns and leaving cold-start evaluation without a temporal perspective. To address this, we propose IntervalLLM—a novel framework that integrates dynamic time-interval modeling into large language models for the first time. It introduces an interval-infused attention mechanism to jointly learn item and temporal representations. Crucially, we pioneer the "time-interval perspective" as a new dimension for cold-start assessment, systematically covering warm/cold scenarios at three levels of granularity: user, item, and interval. Extensive experiments on three benchmark datasets demonstrate an average performance improvement of 4.4% over state-of-the-art baselines, with significantly enhanced robustness and generalization across diverse cold-start settings.

📝 Abstract
Time intervals between purchased items are a crucial factor in sequential recommendation tasks, yet existing approaches focus on item sequences and often overlook intervals by assuming they are static. However, dynamic intervals serve as a dimension of user profiling that distinguishes not only the history within a single user but also different users who share the same item history. In this work, we propose IntervalLLM, a novel framework that integrates interval information into an LLM and incorporates a novel interval-infused attention to jointly consider information about items and intervals. Furthermore, unlike prior studies that address the cold-start scenario only from the perspectives of users and items, we introduce a new viewpoint, the interval perspective, as an additional metric for evaluating recommendation methods in warm and cold scenarios. Extensive experiments on three benchmarks against both traditional and LLM-based baselines demonstrate that our IntervalLLM not only achieves a 4.4% average improvement but also performs best in the warm and cold scenarios across the user, item, and proposed interval perspectives. In addition, we observe that the cold scenario from the interval perspective suffers the most significant performance drop among all recommendation methods. This finding underscores the need for further research on interval-based cold-start challenges and for integrating interval information into sequential recommendation tasks. Our code is available here: https://github.com/sony/ds-research-code/tree/master/recsys25-IntervalLLM.
Problem

Research questions and friction points this paper is trying to address.

Incorporating dynamic time intervals into sequential recommendation models
Addressing cold-start scenarios from an interval perspective
Improving recommendation accuracy by jointly modeling items and intervals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates irregular time intervals into an LLM
Uses an interval-infused attention mechanism
Introduces the interval perspective for cold-start evaluation
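The core idea of the interval-infused attention can be illustrated with a minimal sketch: embed the (irregular) time intervals between interactions and let them modulate the attention scores over the item sequence, so the weights depend jointly on item similarity and elapsed time. Note this is a hypothetical NumPy sketch under assumed shapes and an additive-bias formulation, not the paper's exact architecture; all names (`interval_infused_attention`, `Wi`, etc.) are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interval_infused_attention(item_emb, interval_emb, Wq, Wk, Wv, Wi):
    """Single-head attention with interval-dependent scores (illustrative).

    item_emb:     (seq_len, d) item representations
    interval_emb: (seq_len, d) embeddings of the time intervals preceding
                  each interaction (e.g., from log-bucketized gaps)
    Wq, Wk, Wv, Wi: (d, d) projection matrices
    """
    q = item_emb @ Wq
    k = item_emb @ Wk
    v = item_emb @ Wv
    # Project interval embeddings into the key space and add them as an
    # additive bias on the attention logits, so two users with the same
    # item history but different gaps receive different attention patterns.
    ib = interval_emb @ Wi
    d = item_emb.shape[-1]
    scores = (q @ k.T + q @ ib.T) / np.sqrt(d)  # (seq_len, seq_len)
    return softmax(scores, axis=-1) @ v         # (seq_len, d)
```

A common choice for `interval_emb` is to bucketize `log(1 + gap_seconds)` and look up a learned embedding per bucket, which keeps the vocabulary of intervals small while preserving order-of-magnitude information.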