🤖 AI Summary
To address catastrophic forgetting and task-optimization imbalance in continual multi-task learning for e-commerce user modeling, this paper proposes a prompt-tuning-based incremental user representation method. Our approach introduces (1) task-specific positional prompts as an external memory module to explicitly preserve knowledge from historical tasks, and (2) a context-aware prompt mechanism that models semantic relationships across tasks to enhance knowledge transfer. To the best of our knowledge, this is the first work to systematically integrate prompt tuning into the continual learning paradigm for user modeling. Extensive experiments on real-world e-commerce datasets demonstrate that our method reduces average forgetting by 37% and improves cross-task recommendation AUC by 2.1% over state-of-the-art multi-task and continual learning baselines. The results confirm its effectiveness in simultaneously preserving prior knowledge, maintaining generalization capability, and adapting to new tasks.
📝 Abstract
User modeling in large e-commerce platforms aims to optimize user experiences by incorporating various customer activities. Traditional models targeting a single task often focus on specific business metrics, neglecting the comprehensive user behavior, and thus limiting their effectiveness. To develop more generalized user representations, some existing work adopts Multi-task Learning (MTL)approaches. But they all face the challenges of optimization imbalance and inefficiency in adapting to new tasks. Continual Learning (CL), which allows models to learn new tasks incrementally and independently, has emerged as a solution to MTL's limitations. However, CL faces the challenge of catastrophic forgetting, where previously learned knowledge is lost when the model is learning the new task. Inspired by the success of prompt tuning in Pretrained Language Models (PLMs), we propose PCL, a Prompt-based Continual Learning framework for user modeling, which utilizes position-wise prompts as external memory for each task, preserving knowledge and mitigating catastrophic forgetting. Additionally, we design contextual prompts to capture and leverage inter-task relationships during prompt tuning. We conduct extensive experiments on real-world datasets to demonstrate PCL's effectiveness.