A Contrastive Pretrain Model with Prompt Tuning for Multi-center Medication Recommendation

📅 2024-12-28
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
To address data sparsity in small hospitals and cross-center distribution heterogeneity in multi-center medication recommendation, this paper proposes a two-stage framework. First, it pretrains with masked prediction and contrastive learning to explicitly model both intra- and inter-relationship patterns among diagnoses and procedures. Second, it adapts to each center with lightweight learnable soft prompts (prompt tuning). This is the first multi-center medication recommendation approach to combine self-supervised contrastive pretraining with prompt tuning, which avoids the catastrophic forgetting induced by full-parameter finetuning while explicitly capturing cross-center data heterogeneity. Evaluated on the eICU multi-center dataset, the method significantly outperforms state-of-the-art approaches in recommendation accuracy and generalization across centers. The source code is publicly released to ensure reproducibility.
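
The two pretraining objectives lend themselves to a compact sketch. Below is a minimal PyTorch-style illustration of masked code prediction plus an in-batch InfoNCE contrastive task over two masked views of the same visit; the module names, dimensions, and masking scheme are assumptions for illustration, not the released TEMPT code.

```python
# Hypothetical sketch of TEMPT-style pretraining (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

MASK_ID = 1  # assumption: id 0 = padding, id 1 = [MASK]

class VisitEncoder(nn.Module):
    """Encodes a visit's diagnosis/procedure code sequence."""
    def __init__(self, n_codes, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(n_codes, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mask_head = nn.Linear(d_model, n_codes)  # masked-code classifier

    def forward(self, codes):                 # codes: (B, L) long tensor
        h = self.encoder(self.embed(codes))   # (B, L, d)
        return h, h.mean(dim=1)               # token states, pooled visit vector

def random_mask(codes, p=0.15):
    """Replace a random subset of real codes with [MASK]."""
    mask = (torch.rand(codes.shape, device=codes.device) < p) & (codes > 1)
    return codes.masked_fill(mask, MASK_ID), mask

def pretrain_loss(model, codes, tau=0.2):
    # Task 1: mask prediction (intra-relationships within a visit).
    view1, mask = random_mask(codes)
    h, z1 = model(view1)
    mlm = F.cross_entropy(model.mask_head(h[mask]), codes[mask])

    # Task 2: contrastive InfoNCE (inter-relationships across visits):
    # two masked views of one visit are positives, other visits in the
    # batch serve as negatives.
    view2, _ = random_mask(codes)
    _, z2 = model(view2)
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / tau                   # (B, B) similarity matrix
    contrastive = F.cross_entropy(sim, torch.arange(len(sim), device=sim.device))
    return mlm + contrastive
```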

📝 Abstract
Medication recommendation is one of the most critical health-related applications and has attracted extensive research interest in recent years. Most existing works focus on a single hospital with abundant medical data. However, many small hospitals have only a few records, which hinders the application of existing medication recommendation methods in the real world. We therefore explore a more practical setting, i.e., multi-center medication recommendation, in which most hospitals have few records but the total number of records is large. Though small hospitals may benefit from the pooled abundance of records, they also face the challenge that data distributions differ substantially across hospitals. In this work, we introduce a novel conTrastive prEtrain Model with Prompt Tuning (TEMPT) for multi-center medication recommendation, which consists of a pretraining stage and a finetuning stage. For the pretraining stage, we design two self-supervised tasks to learn general medical knowledge: a mask prediction task and a contrastive task, which extract the intra- and inter-relationships of the input diagnoses and procedures. Furthermore, we devise a novel prompt tuning method to capture the specific information of each hospital instead of adopting common finetuning. On the one hand, the proposed prompt tuning can better learn the heterogeneity of each hospital and fit the various distributions; on the other hand, it also relieves the catastrophic forgetting problem of finetuning. To validate the proposed model, we conduct extensive experiments on eICU, a public multi-center medical dataset. The experimental results illustrate the effectiveness of our model. The implementation code is available to ease reproducibility: https://github.com/Applied-Machine-Learning-Lab/TEMPT.
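
For the second stage, here is a hedged sketch of what the prompt-tuning idea could look like: the pretrained encoder is frozen and each hospital learns only a small bank of soft prompt vectors prepended to its inputs, reusing the hypothetical VisitEncoder from the sketch above. Again, all names and sizes are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of center-specific prompt tuning (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterPromptModel(nn.Module):
    """Freezes the pretrained encoder; learns per-hospital soft prompts."""
    def __init__(self, pretrained, n_centers, n_meds, d_model=64, prompt_len=4):
        super().__init__()
        self.backbone = pretrained                  # a VisitEncoder from above
        for p in self.backbone.parameters():        # frozen: keeps the general
            p.requires_grad = False                 # knowledge, limits forgetting
        self.prompts = nn.Parameter(                # one prompt bank per center
            0.02 * torch.randn(n_centers, prompt_len, d_model))
        self.med_head = nn.Linear(d_model, n_meds)  # multi-label medication logits

    def forward(self, codes, center_id):            # center_id: (B,) long tensor
        tok = self.backbone.embed(codes)                        # (B, L, d)
        seq = torch.cat([self.prompts[center_id], tok], dim=1)  # (B, P+L, d)
        h = self.backbone.encoder(seq)
        return self.med_head(h.mean(dim=1))         # scores over the drug set

# Only `prompts` and `med_head` receive gradients, so adapting a new hospital
# trains a tiny fraction of the parameters, e.g.:
#   logits = model(codes, center_id)
#   loss = F.binary_cross_entropy_with_logits(logits, drug_labels.float())
```
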
Problem

Research questions and friction points this paper is trying to address.

Multi-center Medication Recommendation
Data Scarcity in Small Hospitals
Inter-hospital Data Variability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Medical Knowledge Learning
Self-supervised Learning
Multi-center Medication Recommendation
👥 Authors
Qidong Liu
Assistant Professor, Xi'an Jiaotong University
Recommender System · Large Language Model · Intelligent Healthcare · Causal Inference · Smart Education
Zhaopeng Qiu
NVIDIA
LLM · NLP · Recommender System · Data Mining
Xiangyu Zhao
City University of Hong Kong, China
Xian Wu
Jarvis Research Center, Tencent YouTu Lab, China
Zijian Zhang
Jilin University & City University of Hong Kong, China
Tong Xu
University of Science and Technology of China, China
Feng Tian
Xi’an Jiaotong University, China