🤖 AI Summary
To address data sparsity and cold-start challenges in Web service QoS prediction, this paper pioneers the integration of large language models (LLMs) into QoS prediction, without any fine-tuning. We propose a semantic enhancement method that performs zero-shot semantic encoding of natural-language descriptions of users and services with an LLM, extracting highly discriminative textual features. These features are fused with QoS data from historical interactions via multi-source integration to construct a regression-based prediction model. The approach effectively alleviates data sparsity and enables fine-grained QoS estimation even under sparse observations. Experiments on the WSDream dataset demonstrate that our method significantly outperforms state-of-the-art baselines, reducing RMSE by up to 12.7% under cold-start and high-sparsity conditions. This validates the efficacy and robustness of semantic knowledge transfer for QoS prediction.
📝 Abstract
Large language models (LLMs) have improved rapidly in recent years and have been applied to an increasingly wide range of tasks. After being trained on large text corpora, LLMs acquire the capability to extract rich features from textual data. This capability is potentially useful for the web service recommendation task, where web users and services have intrinsic attributes that can be described in natural-language sentences and are informative for recommendation. In this paper, we explore the feasibility and practicality of using LLMs for web service recommendation. We propose the large language model aided QoS prediction (llmQoS) model, which uses an LLM to extract useful information from the attributes of web users and services via descriptive sentences. This information is then combined with the QoS values of historical user-service interactions to predict the QoS value for any given user-service pair. On the WSDream dataset, llmQoS is shown to overcome the data sparsity issue inherent to the QoS prediction problem and to consistently outperform comparable baseline models.
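The pipeline described above can be sketched in a few lines: encode user and service descriptions into fixed-size vectors, fuse them with features derived from historical QoS observations, and train a regression head on the result. This is a minimal illustration only, not the paper's implementation: the `embed_text` function below is a deterministic hash-seeded stand-in for a real zero-shot LLM encoder, the toy descriptions and QoS values are invented, and a plain least-squares fit stands in for whatever regression model the paper uses.

```python
import hashlib
import numpy as np

def embed_text(sentence: str, dim: int = 16) -> np.ndarray:
    """Stand-in for zero-shot LLM encoding of a descriptive sentence.

    A real system would call an LLM encoder here; this placeholder just
    derives a deterministic unit vector from a hash of the text.
    """
    seed = int.from_bytes(hashlib.sha256(sentence.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def build_features(user_desc: str, svc_desc: str,
                   user_mean_qos: float, svc_mean_qos: float) -> np.ndarray:
    """Fuse semantic features with historical-interaction features
    for one user-service pair (multi-source integration)."""
    return np.concatenate([
        embed_text(user_desc),
        embed_text(svc_desc),
        [user_mean_qos, svc_mean_qos],  # simple stats from observed QoS history
    ])

# Toy data: (user description, service description,
#            user mean RT, service mean RT, observed response time)
rows = [
    ("user in Europe, broadband ISP", "REST weather API hosted in Germany", 0.3, 0.4, 0.35),
    ("user in Asia, mobile network",  "REST weather API hosted in Germany", 1.1, 0.4, 0.90),
    ("user in Europe, broadband ISP", "SOAP payment service hosted in US",  0.3, 0.8, 0.60),
]
X = np.stack([build_features(u, s, um, sm) for u, s, um, sm, _ in rows])
y = np.array([rt for *_, rt in rows])

# Regression head over the fused features (least squares with a bias term)
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w  # predicted QoS for each user-service pair
```

Because the semantic vectors are fixed (no fine-tuning of the encoder), only the lightweight regression head needs training, which is what lets this kind of model make predictions for pairs with little or no interaction history.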