Reason2Decide: Rationale-Driven Multi-Task Learning

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical decision support systems face a fundamental trade-off between prediction accuracy and explanation consistency, exacerbated by exposure bias that decouples rationales from predictions. To address this, we propose a two-stage rationale-driven multi-task collaborative training framework: Stage I focuses exclusively on rationale generation; Stage II jointly optimizes label prediction and rationale generation, incorporating scheduled sampling to mitigate exposure bias. We introduce the first robust rationale-source adaptation mechanism—supporting LLM-generated, nurse-authored, and post-processed rationales—with efficient pretraining requiring only LLM-derived rationales, drastically reducing reliance on manual annotation. Evaluated across three medical datasets, our model achieves superior F1 scores and rationale fidelity compared to mainstream fine-tuned baselines and select zero-shot large language models, while employing only 1/40 the parameters of contemporary foundation models—enabling both high performance and lightweight deployment.

📝 Abstract
Despite the wide adoption of Large Language Models (LLMs), clinical decision support systems face a critical challenge: achieving high predictive accuracy while generating explanations aligned with the predictions. Current approaches suffer from exposure bias, leading to misaligned explanations. We propose Reason2Decide, a two-stage training framework that addresses key challenges in self-rationalization, including exposure bias and task separation. In Stage-1, our model is trained on rationale generation; in Stage-2, we jointly train on label prediction and rationale generation, applying scheduled sampling to gradually transition from conditioning on gold labels to conditioning on model predictions. We evaluate Reason2Decide on three medical datasets, including a proprietary triage dataset and public biomedical QA datasets. Across model sizes, Reason2Decide outperforms other fine-tuning baselines and some zero-shot LLMs in prediction (F1) and rationale fidelity (BERTScore, BLEU, LLM-as-a-Judge). On triage, Reason2Decide is robust across rationale sources: LLM-generated, nurse-authored, and nurse-post-processed rationales. In our experiments, while using only LLM-generated rationales in Stage-1, Reason2Decide outperforms other fine-tuning variants. This indicates that LLM-generated rationales are suitable for pretraining models, reducing reliance on human annotations. Remarkably, Reason2Decide achieves these gains with models 40x smaller than contemporary foundation models, making clinical reasoning more accessible for resource-constrained deployments while still providing explainable decision support.
Problem

Research questions and friction points this paper is trying to address.

Addresses exposure bias in clinical decision support systems
Improves alignment between predictions and generated explanations
Enables effective self-rationalization with smaller language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage training for self-rationalization and prediction
Scheduled sampling to align explanations with predictions
Uses LLM-generated rationales to reduce human annotation
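The scheduled-sampling idea in the bullets above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the linear decay schedule are our own assumptions; the paper does not specify its exact schedule here.

```python
import random

def sample_conditioning_label(gold_label, predicted_label, step, total_steps):
    """Scheduled sampling for rationale conditioning (illustrative sketch).

    Early in Stage-2 training, the rationale generator conditions on the
    gold label; as training progresses, it increasingly conditions on the
    model's own prediction, so that training matches the inference-time
    setting and exposure bias is reduced.

    Assumes a simple linear decay of the gold-label probability.
    """
    p_gold = max(0.0, 1.0 - step / total_steps)  # probability of using gold
    return gold_label if random.random() < p_gold else predicted_label
```

For example, at step 0 the generator always sees the gold label (`p_gold = 1.0`), and by `step >= total_steps` it always sees the model's prediction, mirroring deployment conditions.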
Authors
H M Quamran Hasan — University of Alberta
Housam Khalifa Bashier — University of Alberta
Jiayi Dai — University of Alberta
Mi-Young Kim — Camrose, Alberta, Canada
Randy Goebel — Professor of Computing Science, University of Alberta
Keywords: artificial intelligence, logical reasoning, visualization, machine learning, natural language processing