Large Language Model Aided QoS Prediction for Service Recommendation

📅 2024-08-05
🏛️ 2025 IEEE International Conference on Software Services Engineering (SSE)
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address data sparsity and cold-start challenges in Web service QoS prediction, this paper pioneers the integration of large language models (LLMs) into QoS prediction—without fine-tuning. We propose a semantic enhancement method that performs zero-shot semantic encoding of natural-language descriptions of users and services using an LLM, extracting highly discriminative textual features. These features are fused with historical interaction-based QoS data via multi-source integration to construct a regression-based prediction model. The approach effectively alleviates data sparsity and enables fine-grained QoS estimation even under high sparsity. Experiments on the WSDream dataset demonstrate that our method significantly outperforms state-of-the-art baselines, reducing RMSE by up to 12.7% under cold-start and high-sparsity conditions. This validates the efficacy and robustness of semantic knowledge transfer for QoS prediction.
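The summary above describes a pipeline of zero-shot semantic encoding fused with historical interaction statistics, feeding a regression model. The following is a minimal, illustrative sketch of that kind of multi-source fusion, with two loud assumptions: random vectors stand in for the LLM's zero-shot text embeddings, and plain least-squares regression stands in for the paper's prediction model. All names, shapes, and feature choices here are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# ASSUMPTION: random vectors stand in for zero-shot LLM embeddings of the
# natural-language descriptions of users and services. The real method would
# encode actual descriptive sentences with an LLM.
n_users, n_services, emb_dim = 20, 15, 8
user_emb = rng.normal(size=(n_users, emb_dim))
service_emb = rng.normal(size=(n_services, emb_dim))

# Sparse historical QoS matrix (e.g., response times); NaN marks unobserved
# user-service interactions (~20% observed, mimicking high sparsity).
qos = rng.exponential(scale=1.0, size=(n_users, n_services))
mask = rng.random((n_users, n_services)) < 0.2
qos[~mask] = np.nan

# Interaction-based features: per-user and per-service mean QoS, falling
# back to the global mean for rows/columns with no observations.
global_mean = float(np.nanmean(qos))
user_mean = np.nan_to_num(np.nanmean(qos, axis=1), nan=global_mean)
service_mean = np.nan_to_num(np.nanmean(qos, axis=0), nan=global_mean)

def features(u, s):
    # Multi-source fusion: concatenate the semantic embeddings with the
    # historical interaction statistics for the (user, service) pair.
    return np.concatenate(
        [user_emb[u], service_emb[s], [user_mean[u], service_mean[s]]]
    )

# Build a training set from the observed entries and fit a linear model by
# least squares as a simple stand-in for the regression-based predictor.
obs = np.argwhere(mask)
X = np.stack([features(u, s) for u, s in obs])
y = qos[mask]
Xb = np.hstack([X, np.ones((len(X), 1))])  # bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def predict(u, s):
    # Predict QoS for any user-service pair, observed or cold-start.
    x = np.append(features(u, s), 1.0)
    return float(x @ w)
```

Because the semantic features exist for every user and service regardless of interaction history, the fused feature vector is defined even for cold-start pairs, which is the mechanism the summary credits for alleviating sparsity.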

📝 Abstract
Large language models (LLMs) have seen rapid improvement in recent years and have been used in an increasingly wide range of applications. After being trained on large text corpora, LLMs acquire the capability of extracting rich features from textual data. Such capability is potentially useful for the web service recommendation task, where web users and services have intrinsic attributes that can be described in natural-language sentences and are useful for recommendation. In this paper, we explore the possibility and practicality of using LLMs for web service recommendation. We propose the large language model aided QoS prediction (llmQoS) model, which uses LLMs to extract useful information from the attributes of web users and services via descriptive sentences. This information is then combined with the QoS values of historical user-service interactions to predict QoS values for any given user-service pair. On the WSDream dataset, llmQoS is shown to overcome the data sparsity issue inherent to the QoS prediction problem and to outperform comparable baseline models consistently.
Problem

Research questions and friction points this paper is trying to address.

Predicting QoS values for web service recommendation
Overcoming data sparsity in QoS prediction using LLMs
Extracting features from user and service textual attributes
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs extract features from descriptive sentences
Combines extracted features with historical QoS data
Overcomes data sparsity in QoS prediction
Huiying Liu
School of Computer Science and Technology, Anhui University, Hefei, Anhui, China
Zekun Zhang
Stony Brook University
Computer Vision, Machine Learning
Honghao Li
Anhui University
CTR Prediction, Recommender Systems
Qilin Wu
School of Information Engineering, Chaohu University, Hefei, Anhui, China
Yiwen Zhang
School of Computer Science and Technology, Anhui University, Hefei, Anhui, China