🤖 AI Summary
Fine-tuning large language models (LLMs) for healthcare prediction tasks typically requires access to sensitive patient data, posing significant privacy risks. Method: This paper introduces PatientDx, a data-free LLM merging framework tailored to clinical prediction that eliminates the need to train on raw patient records. It merges multiple pre-trained LLMs around a pivot model adapted for numerical reasoning and tunes the merging hyperparameters against a performance metric, without any gradient updates on patient data. Contribution/Results: Evaluated on mortality prediction with MIMIC-IV, PatientDx improves AUROC by up to 7% over the initial models while being less prone to data leakage than fine-tuned alternatives. The best merged model is publicly released.
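The merging step can be pictured as parameter-space interpolation between a pivot model and another pre-trained model. Below is a minimal, illustrative sketch using PyTorch and Hugging Face Transformers; the linear-interpolation recipe, the function name `merge_linear`, and the model identifiers are assumptions for illustration, not the paper's exact method.

```python
# Minimal sketch of data-free weight merging between two LLMs.
# Assumption: a simple linear interpolation of parameters; the paper's
# building-block merging strategy may differ per layer/module.
import torch
from transformers import AutoModelForCausalLM

def merge_linear(pivot_name: str, expert_name: str, alpha: float):
    """Interpolate parameters: theta = (1 - alpha) * pivot + alpha * expert."""
    pivot = AutoModelForCausalLM.from_pretrained(pivot_name, torch_dtype=torch.float16)
    expert = AutoModelForCausalLM.from_pretrained(expert_name, torch_dtype=torch.float16)
    expert_sd = expert.state_dict()
    merged_sd = {
        name: (1.0 - alpha) * weight + alpha * expert_sd[name]
        for name, weight in pivot.state_dict().items()
    }
    pivot.load_state_dict(merged_sd)  # reuse the pivot architecture for the merged model
    return pivot

# e.g. a 0.4 blend between a pivot and an expert model (hypothetical names):
# merged = merge_linear("mistralai/Mistral-7B-v0.1", "some-org/clinical-mistral", 0.4)
```

No training data touches this step: the merge operates purely on released model weights, which is what makes the construction data-free.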
📝 Abstract
Fine-tuning of Large Language Models (LLMs) has become the default practice for improving model performance on a given task. However, this improvement comes at the cost of training on vast amounts of annotated data, which may be sensitive and thus raise significant data privacy concerns. Healthcare in particular is among the domains most exposed to such privacy issues. In this paper, we present PatientDx, a model merging framework that allows the design of effective LLMs for health-predictive tasks without requiring fine-tuning or adaptation on patient data. Our proposal builds on recently proposed LLM merging techniques and aims to optimize a building-block merging strategy. PatientDx uses a pivotal model adapted to numerical reasoning and tunes hyperparameters on examples using a performance metric, but without training the LLM on these data. Experiments on the mortality tasks of the MIMIC-IV dataset show improvements of up to 7% in AUROC compared to the initial models. Additionally, we confirm that, compared to fine-tuned models, our proposal is less prone to data leakage without hurting performance. Finally, we qualitatively illustrate the capabilities of our proposal through a case study. Our best model is publicly available at https://huggingface.co/Jgmorenof/mistral_merged_0_4.
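The hyperparameter tuning the abstract describes (selection by a performance metric, with no training on the examples) could look like the sketch below. It reuses the hypothetical `merge_linear` helper from the sketch above; the `score_mortality` risk scorer and the placeholder model names are likewise assumptions, not the paper's implementation.

```python
# Hedged sketch: choosing the merge coefficient alpha by evaluating AUROC
# on held-out examples. Only inference is performed; no gradient updates
# ever touch the patient data.
from sklearn.metrics import roc_auc_score

PIVOT = "pivot-model-name"    # placeholder identifiers, not real checkpoints
EXPERT = "expert-model-name"

def score_mortality(model, text: str) -> float:
    """Hypothetical helper: map a patient record to a mortality risk score."""
    raise NotImplementedError  # e.g. probability of a 'deceased' token/label

def tune_alpha(alphas, val_texts, val_labels):
    """Grid-search the interpolation weight against validation AUROC."""
    best_alpha, best_auroc = None, -1.0
    for alpha in alphas:
        model = merge_linear(PIVOT, EXPERT, alpha)
        scores = [score_mortality(model, t) for t in val_texts]
        auroc = roc_auc_score(val_labels, scores)
        if auroc > best_auroc:
            best_alpha, best_auroc = alpha, auroc
    return best_alpha, best_auroc

# e.g. tune_alpha([0.0, 0.2, 0.4, 0.6, 0.8, 1.0], texts, labels)
```

Because the examples are used only to evaluate candidate merges, their content is never encoded into the model weights, which is the intuition behind the reduced data-leakage risk relative to fine-tuning.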