🤖 AI Summary
Clinical decision support systems face a fundamental trade-off between prediction accuracy and explanation consistency, exacerbated by exposure bias that decouples rationales from predictions. To address this, we propose a two-stage rationale-driven multi-task collaborative training framework: Stage I focuses exclusively on rationale generation; Stage II jointly optimizes label prediction and rationale generation, incorporating scheduled sampling to mitigate exposure bias. We introduce the first robust rationale-source adaptation mechanism—supporting LLM-generated, nurse-authored, and post-processed rationales—with efficient pretraining requiring only LLM-derived rationales, drastically reducing reliance on manual annotation. Evaluated across three medical datasets, our model achieves superior F1 scores and rationale fidelity compared to mainstream fine-tuned baselines and select zero-shot large language models, while employing only 1/40 the parameters of contemporary foundation models—enabling both high performance and lightweight deployment.
📝 Abstract
Despite the wide adoption of Large Language Models (LLMs), clinical decision support systems face a critical challenge: achieving high predictive accuracy while generating explanations that are aligned with those predictions. Current approaches suffer from exposure bias, which leads to misaligned explanations. We propose Reason2Decide, a two-stage training framework that addresses key challenges in self-rationalization, including exposure bias and task separation. In Stage-1, the model is trained on rationale generation alone; in Stage-2, we jointly train on label prediction and rationale generation, applying scheduled sampling to gradually transition from conditioning on gold labels to conditioning on the model's own predictions. We evaluate Reason2Decide on three medical datasets: a proprietary triage dataset and two public biomedical QA datasets. Across model sizes, Reason2Decide outperforms other fine-tuning baselines and some zero-shot LLMs in prediction (F1) and rationale fidelity (BERTScore, BLEU, LLM-as-a-Judge). On the triage task, Reason2Decide is robust to rationale source, performing consistently across LLM-generated, nurse-authored, and nurse-post-processed rationales. Even when using only LLM-generated rationales in Stage-1, Reason2Decide outperforms other fine-tuning variants, indicating that LLM-generated rationales are suitable for pretraining and reduce reliance on human annotation. Remarkably, Reason2Decide achieves these gains with models 40x smaller than contemporary foundation models, making clinical reasoning more accessible for resource-constrained deployments while still providing explainable decision support.
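The Stage-2 scheduled-sampling idea can be sketched in a few lines: at each training step, the rationale generator conditions on the gold label with a probability that decays over training, and on the model's predicted label otherwise. This is a minimal illustration under assumed conventions (function names are hypothetical, and the inverse-sigmoid decay of Bengio et al.'s scheduled sampling is assumed; the paper's exact schedule may differ):

```python
import math
import random

def gold_probability(step: int, total_steps: int, k: float = 5.0) -> float:
    """Probability of conditioning on the gold label at this step.

    Inverse-sigmoid decay: starts high (~k/(k+1)) and falls toward 0,
    so training gradually shifts from gold labels to the model's own
    predictions, mitigating exposure bias.
    """
    x = step / total_steps * k
    return k / (k + math.exp(x))

def scheduled_label(gold_label: str, predicted_label: str,
                    step: int, total_steps: int, rng=random) -> str:
    """Sample which label the rationale generator conditions on."""
    if rng.random() < gold_probability(step, total_steps):
        return gold_label
    return predicted_label
```

During a training loop, `scheduled_label` would replace the fixed gold label in the rationale-generation prompt, so the model increasingly learns to explain its own predictions rather than labels it will not see at inference time.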