CPGPrompt: Translating Clinical Guidelines into LLM-Executable Decision Support

πŸ“… 2026-01-07
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Integrating clinical practice guidelines into AI systems remains challenging due to poor interpretability, low adherence, and limited generalizability of existing approaches. This work proposes the first framework that automatically transforms narrative clinical guidelines into structured decision trees and generates executable prompts for large language models (LLMs). By integrating automated prompt engineering, dynamic reasoning, and synthetic case generation, the framework delivers high-fidelity, interpretable, and cross-domain clinical decision support. Empirical evaluation demonstrates its effectiveness: on a binary specialty referral task, the system achieves F1 scores ranging from 0.85 to 1.00, while on multi-path classification tasks, F1 scores range from 0.47 to 0.77, thereby delineating both its performance capabilities and applicability boundaries.

πŸ“ Abstract
Clinical practice guidelines (CPGs) provide evidence-based recommendations for patient care; however, integrating them into Artificial Intelligence (AI) systems remains challenging. Previous approaches, such as rule-based systems, face significant limitations, including poor interpretability, inconsistent adherence to guidelines, and narrow domain applicability. To address this, we develop and validate CPGPrompt, an auto-prompting system that converts narrative clinical guidelines into executable prompts for large language models (LLMs). Our framework translates CPGs into structured decision trees and utilizes an LLM to dynamically navigate them for patient case evaluation. Synthetic vignettes were generated across three domains (headache, lower back pain, and prostate cancer) and distributed into four categories to test different decision scenarios. System performance was assessed on both binary specialty-referral decisions and fine-grained pathway-classification tasks. The binary specialty referral classification achieved consistently strong performance across all domains (F1: 0.85–1.00), with high recall (1.00 ± 0.00). In contrast, multi-class pathway assignment showed reduced performance, with domain-specific variations: headache (F1: 0.47), lower back pain (F1: 0.72), and prostate cancer (F1: 0.77). Domain-specific performance differences reflected the structure of each guideline. The headache guideline highlighted challenges with negation handling. The lower back pain guideline required temporal reasoning. In contrast, prostate cancer pathways benefited from quantifiable laboratory tests, resulting in more reliable decision-making.
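The core idea in the abstract — a guideline rendered as a decision tree that an LLM navigates node by node for a given patient case — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Node` structure, the `ask_llm` stub (which replaces a real LLM call with keyword matching so the sketch runs offline), and the toy referral tree are all hypothetical.

```python
# Minimal sketch of LLM-guided decision-tree navigation, loosely modeled
# on the CPGPrompt idea. All names here are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    question: str = ""                 # criterion phrased as a yes/no prompt
    yes: Optional["Node"] = None       # branch taken if the answer is "yes"
    no: Optional["Node"] = None        # branch taken if the answer is "no"
    decision: Optional[str] = None     # leaf: recommended pathway/referral


def ask_llm(question: str, case: str) -> bool:
    """Stub for an LLM call: a real system would prompt the model with the
    patient vignette plus the node's question and parse a yes/no answer.
    Here we substitute a trivial keyword check so the sketch is runnable."""
    keyword = question.lower().split()[-1].rstrip("?")
    return keyword in case.lower()


def navigate(node: Node, case: str) -> str:
    """Walk the tree from the root, letting the (stubbed) LLM pick each branch."""
    while node.decision is None:
        node = node.yes if ask_llm(node.question, case) else node.no
    return node.decision


# Toy one-level tree standing in for a binary specialty-referral guideline.
tree = Node(
    question="Does the patient report red flags?",
    yes=Node(decision="refer to specialist"),
    no=Node(decision="manage in primary care"),
)

print(navigate(tree, "sudden severe headache with red flags"))
```

In a real deployment, `ask_llm` would carry the generated prompt for that node, which is where the paper's automated prompt engineering and dynamic reasoning would plug in.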
Problem

Research questions and friction points this paper is trying to address.

Clinical Practice Guidelines
AI Integration
Decision Support
Large Language Models
Guideline Adherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

CPGPrompt
clinical practice guidelines
large language models
decision tree
auto-prompting
πŸ”Ž Similar Papers
No similar papers found.
Ruiqi Deng
Information Science (Health Tech), Cornell Tech, New York, NY, USA
Geoffrey Martin
Systems Engineering, Cornell University, Ithaca, NY, USA
Tony Wang
Cornell University
Human-Computer Interaction, Social Computing
Gongbo Zhang
School of Electronic and Computer Engineering, Peking University
AI for Science, Machine Learning, Generative Model
Yi Liu
Department of Medicine, Weill Cornell Medicine, New York, NY, USA
Chunhua Weng
Professor, Columbia University
Biomedical Informatics, Clinical Research Informatics
Yanshan Wang
Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
Justin F. Rousseau
Peter O’Donnell Jr. Brain Institute, UT Southwestern Medical Center, Dallas, TX, USA
Yifan Peng
Associate Professor at Weill Cornell Medicine
NLP, CV, machine learning