🤖 AI Summary
This study addresses the challenge of jointly optimizing factual accuracy, helpfulness, and safety in large language models (LLMs) for medical question answering. We construct a multidimensional evaluation benchmark of over one thousand health-related questions and propose a three-dimensional evaluation framework, integrating honesty, helpfulness, and harmlessness, to systematically assess open-source models including Mistral-7B, BioMistral-7B-DARE, and AlpaCare-13B, comparing few-shot prompting and domain-specific fine-tuning and validating results against human evaluation. AlpaCare-13B achieves the highest accuracy (91.7%) and harmlessness score (0.92), while BioMistral-7B-DARE attains strong safety (0.90). Few-shot prompting improves overall accuracy by 7 percentage points, yet all models show reduced helpfulness on complex queries. Crucially, we empirically characterize the trade-offs among accuracy, safety, and practical utility in clinical settings, providing the first reproducible evaluation paradigm and an empirical foundation for the trustworthy deployment of medical LLMs.
📝 Abstract
Large Language Models (LLMs) hold significant promise for transforming digital health by enabling automated medical question answering. However, ensuring that these models meet critical industry standards for factual accuracy, usefulness, and safety remains a challenge, especially for open-source solutions. We present a rigorous benchmarking framework built on a dataset of over 1,000 health questions, assessing model performance across honesty, helpfulness, and harmlessness. Our results highlight trade-offs between factual reliability and safety among the evaluated models: Mistral-7B, BioMistral-7B-DARE, and AlpaCare-13B. AlpaCare-13B achieves the highest accuracy (91.7%) and harmlessness (0.92), while domain-specific tuning gives BioMistral-7B-DARE strong safety (0.90) despite its smaller scale. Few-shot prompting improves accuracy from 78% to 85%, yet all models show reduced helpfulness on complex queries, underscoring ongoing challenges in clinical QA.
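To make the three-dimensional scoring concrete, the following is a minimal sketch of how per-model scores along the honesty, helpfulness, and harmlessness axes might be aggregated. The function name, the per-response rating format, and the 0-to-1 rating scale are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: each model response is rated on honesty,
# helpfulness, and harmlessness (assumed scale: 0.0 to 1.0), and
# per-model scores are averaged across all rated responses.
from statistics import mean

def aggregate_3h(ratings):
    """ratings: list of dicts with 'honesty', 'helpfulness',
    and 'harmlessness' values in [0, 1]; returns per-dimension means."""
    return {
        dim: round(mean(r[dim] for r in ratings), 3)
        for dim in ("honesty", "helpfulness", "harmlessness")
    }

# Toy example with three rated responses for one model:
ratings = [
    {"honesty": 1.0, "helpfulness": 0.8, "harmlessness": 0.9},
    {"honesty": 0.9, "helpfulness": 0.7, "harmlessness": 1.0},
    {"honesty": 0.8, "helpfulness": 0.9, "harmlessness": 0.9},
]
print(aggregate_3h(ratings))
# {'honesty': 0.9, 'helpfulness': 0.8, 'harmlessness': 0.933}
```

In this sketch each dimension is reported separately rather than collapsed into a single number, which preserves the trade-offs the study emphasizes (e.g., a model can lead on harmlessness while trailing on helpfulness).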