🤖 AI Summary
Commercial recommendation systems must dynamically adapt to shifting task objectives (e.g., accuracy–diversity trade-offs), yet retraining models incurs prohibitive computational overhead, hindering real-time online adaptation. To address this, we propose a parameter-level, training-free, real-time controllable adaptation method. Our approach is the first to construct a controllable diffusion model directly in the parameter space, leveraging classifier-free guidance to generate task-specific model parameters zero-shot and on demand. It enables test-time adaptive inference and model-agnostic deployment. Evaluated on multiple public and real-world industrial datasets, our method significantly enhances multi-objective controllability while maintaining state-of-the-art recommendation performance. Parameter generation takes under one second, over 300× faster than full retraining, demonstrating the method's efficiency for dynamic recommendation adaptation.
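For intuition, classifier-free guidance in conditional diffusion models is commonly written as a weighted combination of conditional and unconditional noise predictions at each denoising step. The notation below is a standard formulation, not necessarily the paper's exact symbols: here $\mathbf{p}_t$ is the noised parameter vector at step $t$, $c$ is the task-requirement condition, $\varnothing$ denotes the unconditional (dropped-condition) input, and $w$ is the guidance scale:

```latex
\hat{\epsilon}(\mathbf{p}_t, t, c) = (1 + w)\,\epsilon(\mathbf{p}_t, t, c) - w\,\epsilon(\mathbf{p}_t, t, \varnothing)
```

Larger $w$ pushes the generated parameters more strongly toward the specified task requirement, at the cost of sample diversity.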
📝 Abstract
Commercial recommender systems face the challenge that task requirements from platforms or users often change dynamically (e.g., varying preferences for accuracy or diversity). Ideally, the model should be retrained with the new objective function to adapt to these changes in task requirements. In practice, however, the high computational cost of retraining makes this impractical for models already deployed in online environments. This raises a new and challenging problem: how to efficiently adapt the learned model to different task requirements by controlling model parameters after deployment, without retraining. To address this issue, we propose a novel controllable learning approach via Parameter Diffusion for controllable multi-task Recommendation (PaDiRec), which allows recommendation model parameters to be customized and adapted to new task requirements without retraining. Specifically, we first obtain optimized model parameters through adapter tuning under the feasible task requirements. We then use a diffusion model as a parameter generator, employing classifier-free guidance in conditional training to learn the distribution of optimized model parameters under various task requirements. Finally, given a task requirement, the diffusion model generates model parameters in a test-time adaptation manner. As a model-agnostic approach, PaDiRec can use existing recommendation models as backbones to enhance their controllability. Extensive experiments on public datasets and a dataset from a commercial app indicate that PaDiRec effectively enhances controllability through efficient model parameter generation. The code is released at https://anonymous.4open.science/r/PaDiRec-DD13.
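The test-time generation step can be sketched as classifier-free-guided ancestral (DDPM-style) sampling over a flattened parameter vector. Everything below is a minimal, self-contained illustration: `predict_noise` is a hypothetical stand-in for the trained parameter generator (a real one would be a neural network trained on adapter parameters optimized under many task requirements), and the condition vectors, dimensions, and schedule are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical toy denoiser standing in for the trained parameter generator:
# it predicts the noise in a flattened parameter vector `theta_t` at diffusion
# step t, optionally conditioned on a task-requirement vector `cond`
# (e.g., an accuracy-diversity trade-off specification).
def predict_noise(theta_t, t, cond=None):
    # A fixed linear map keeps the sketch deterministic and runnable;
    # the condition shifts the prediction so guidance has an effect.
    bias = 0.0 if cond is None else 0.1 * float(cond @ np.arange(1, cond.size + 1))
    return 0.5 * theta_t + bias

def cfg_sample(dim, cond, steps=50, guidance=2.0, seed=0):
    """Classifier-free-guided sampling of a parameter vector.

    At each step the conditional and unconditional noise predictions are
    combined as eps = (1 + w) * eps_cond - w * eps_uncond, steering the
    sample toward the given task requirement without any retraining.
    """
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(dim)           # start from pure noise
    betas = np.linspace(1e-4, 0.02, steps)     # standard DDPM noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    for t in reversed(range(steps)):
        eps_c = predict_noise(theta, t, cond)
        eps_u = predict_noise(theta, t, None)
        eps = (1.0 + guidance) * eps_c - guidance * eps_u
        # DDPM posterior mean; fresh noise is added at every step except t = 0.
        theta = (theta - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            theta += np.sqrt(betas[t]) * rng.standard_normal(dim)
    return theta

# Generate parameters for two different task requirements on demand.
params_acc = cfg_sample(dim=8, cond=np.array([1.0, 0.0]))  # accuracy-focused
params_div = cfg_sample(dim=8, cond=np.array([0.0, 1.0]))  # diversity-focused
```

The generated vector would then be reshaped and loaded into the backbone's adapter weights, which is what makes the approach model-agnostic: the backbone itself is never retrained, only its adapter parameters are regenerated per task requirement.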