🤖 AI Summary
Medical large language models (LLMs) deployed in conversational assistants suffer from an imbalance between safety and utility—exhibiting excessive compliance with harmful requests and undue rejection of benign queries. Method: We propose an iterative, multi-stage alignment framework that innovatively integrates Kahneman–Tversky heuristic modeling with direct preference optimization (DPO) and the KTO algorithm. This approach systematically uncovers cognitive biases in safety calibration across diverse model architectures and rigorously defines the complementary roles of self-assessment and external discriminators. Evaluation leverages the CARES-18K benchmark, incorporating fine-tuned discriminators, human annotation, and automated metrics across multiple alignment iterations. Results: Our method achieves up to a 42% improvement in safety metrics across mainstream LLMs, significantly strengthening adversarial robustness while reducing false rejection rates to clinically viable thresholds—demonstrating both efficacy and cross-architecture transferability in clinical dialogue systems.
📝 Abstract
Large Language Models (LLMs) are increasingly used in healthcare, yet ensuring their safety and trustworthiness remains a barrier to deployment. Conversational medical assistants must avoid unsafe compliance without over-refusing benign queries. We present an iterative post-deployment alignment framework that applies Kahneman-Tversky Optimization (KTO) and Direct Preference Optimization (DPO) to refine models against domain-specific safety signals. Using the CARES-18K benchmark for adversarial robustness, we evaluate four LLMs (Llama-3B/8B, Meditron-8B, Mistral-7B) across multiple cycles. Our results show up to 42% improvement in safety-related metrics for harmful query detection, alongside interesting trade-offs against erroneous refusals, thereby exposing architecture-dependent calibration biases. We also perform ablation studies to identify when self-evaluation is reliable and when external or finetuned judges are necessary to maximize performance gains. Our findings underscore the importance of adopting best practices that balance patient safety, user trust, and clinical utility in the design of conversational medical assistants.