Silent Sabotage During Fine-Tuning: Few-Shot Rationale Poisoning of Compact Medical LLMs

📅 2026-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical security vulnerability in medical large language models (LLMs) during supervised fine-tuning, where existing research has predominantly focused on output-label backdoors while overlooking poisoning risks targeting the reasoning process itself. The authors propose a novel few-shot chain-of-thought poisoning method that injects corrupted reasoning chains into the fine-tuning data, shifting the attack objective from manipulating final outputs to degrading internal reasoning. This approach achieves precise, trigger-free performance degradation that is difficult to detect. Experimental results demonstrate that only a small number of poisoned samples can significantly impair the model’s reasoning accuracy on specific medical topics, outperforming baseline methods such as knowledge overriding. The findings reveal a previously underexplored attack surface in the fine-tuning phase of medical LLMs, highlighting urgent implications for their safety and reliability.

Technology Category

Application Category

📝 Abstract
Supervised fine-tuning (SFT) is essential for the development of medical large language models (LLMs), yet prior poisoning studies have mainly focused on the detectable backdoor attacks. We propose a novel poisoning attack targeting the reasoning process of medical LLMs during SFT. Unlike backdoor attacks, our method injects poisoned rationales into few-shot training data, leading to stealthy degradation of model performance on targeted medical topics. Results showed that knowledge overwriting was ineffective, while rationale poisoning caused significant decline on the accuracy of the target subject, as long as no correct samples of the same subject appear in the dataset. A minimum number and ratio of poisoned samples was needed to carry out an effective and stealthy attack, which was more efficient and accurate than catastrophic forgetting. We demonstrate though this study the risk of SFT-stage poisoning, hoping to spur more studies of defense in the sensitive medical domain.
Problem

Research questions and friction points this paper is trying to address.

fine-tuning poisoning
medical LLMs
rationale poisoning
stealthy attack
few-shot learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

rationale poisoning
few-shot learning
medical LLMs
supervised fine-tuning
stealthy attack
🔎 Similar Papers
No similar papers found.
J
Jingyuan Xie
Department of Electronics Engineering, Tsinghua University, Beijing, China
W
Wenjie Wang
Department of Electronics Engineering, Tsinghua University, Beijing, China
Ji Wu
Ji Wu
Tsinghua University
Artificial Intelligence,smart healthcaremachine learningpattern recognitionspeech recognition
J
Jiandong Gao
Department of Electronics Engineering, Tsinghua University, Beijing, China