Can Generative AI Support Patients' & Caregivers' Informational Needs? Towards Task-Centric Evaluation Of AI Systems

📅 2024-01-31
📈 Citations: 1
Influential: 0
🤖 AI Summary
Evaluating the real-world clinical utility of generative AI systems (e.g., ChatGPT, Claude) in helping patients and caregivers make sense of chest CT reports and images, and thereby supporting shared decision-making, remains difficult because human-centered, task-oriented assessment frameworks are lacking. Method: The authors conducted a thematic analysis of conversations in which participants discussed the chest CT scan and radiology report of a fictitious close relative with a cardiothoracic radiologist, identifying core information needs such as clarifying terminology, locating findings in the image, and interpreting prognosis. Two state-of-the-art generative AI systems were then evaluated against the radiologist's responses across these themes. Contribution/Results: This paradigm moves beyond static accuracy benchmarks by assessing how well model responses support clinical understanding and decision-making. The results show notable variability in response quality across themes and underscore that patient-facing AI must accommodate diverse, dynamic, and context-sensitive informational needs. The work takes initial steps toward an empirically grounded, human-centered evaluation framework for trustworthy medical AI.

📝 Abstract
Generative AI systems such as ChatGPT and Claude are built upon language models that are typically evaluated for accuracy on curated benchmark datasets. Such evaluation paradigms measure predictive and reasoning capabilities of language models but do not assess if they can provide information that is useful to people. In this paper, we take some initial steps in developing an evaluation paradigm that centers human understanding and decision-making. We study the utility of generative AI systems in supporting people in a concrete task - making sense of clinical reports and imagery in order to make a clinical decision. We conducted a formative need-finding study in which participants discussed chest computed tomography (CT) scans and associated radiology reports of a fictitious close relative with a cardiothoracic radiologist. Using thematic analysis of the conversation between participants and medical experts, we identified commonly occurring themes across interactions, including clarifying medical terminology, locating the problems mentioned in the report in the scanned image, understanding disease prognosis, discussing the next diagnostic steps, and comparing treatment options. Based on these themes, we evaluated two state-of-the-art generative AI systems against the radiologist's responses. Our results reveal variability in the quality of responses generated by the models across various themes. We highlight the importance of patient-facing generative AI systems to accommodate a diverse range of conversational themes, catering to the real-world informational needs of patients.
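The evaluation described in the abstract can be pictured as a theme-wise comparison loop: for each patient or caregiver question, obtain a model response and score it against the radiologist's response for the corresponding theme. Below is a minimal sketch of that idea, assuming a hypothetical caller-supplied `query_model` function and a crude token-overlap score as a stand-in for the radiologist-grounded, qualitative grading the authors actually performed; the theme labels come from the abstract, everything else is illustrative.

```python
"""Illustrative sketch of a theme-wise evaluation loop (not the authors' code).

Assumptions: `query_model` is any caller-supplied function mapping a question
to a model response; `overlap_f1` is a crude token-overlap score standing in
for expert, radiologist-grounded judgment.
"""
from collections import Counter
from dataclasses import dataclass
from typing import Callable, Dict, List

# Conversational themes identified in the study (listed in the abstract).
THEMES = [
    "clarifying medical terminology",
    "locating findings in the scanned image",
    "understanding disease prognosis",
    "discussing next diagnostic steps",
    "comparing treatment options",
]


@dataclass
class EvalItem:
    theme: str       # which informational need the question probes
    question: str    # a patient/caregiver question about the CT report
    reference: str   # the radiologist's response, treated as ground truth


def overlap_f1(candidate: str, reference: str) -> float:
    """Token-overlap F1: a placeholder for rubric-based expert scoring."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    common = sum((cand & ref).values())
    if common == 0:
        return 0.0
    precision = common / sum(cand.values())
    recall = common / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def evaluate_by_theme(items: List[EvalItem],
                      query_model: Callable[[str], str]) -> Dict[str, float]:
    """Return the mean score of the model's answers, grouped by theme."""
    per_theme: Dict[str, List[float]] = {t: [] for t in THEMES}
    for item in items:
        answer = query_model(item.question)
        per_theme.setdefault(item.theme, []).append(
            overlap_f1(answer, item.reference))
    return {t: sum(s) / len(s) for t, s in per_theme.items() if s}


# Example usage (hypothetical data and model):
# items = [EvalItem("comparing treatment options",
#                   "Is surgery or radiation better for this nodule?",
#                   "<radiologist's explanation>")]
# print(evaluate_by_theme(items, query_model=lambda q: "<model response>"))
```

In the paper itself, the comparison against the radiologist's responses is qualitative and theme-driven rather than an automatic metric; the overlap score above only marks where an expert rating or rubric would plug in.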
Problem

Research questions and friction points this paper addresses.

Evaluating generative AI for patient and caregiver informational needs.
Assessing AI's ability to clarify medical reports and imagery.
Comparing AI responses to a radiologist's advice on clinical decisions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed a task-centric evaluation paradigm for generative AI.
Assessed AI utility in supporting clinical decision-making.
Identified key conversational themes for patient-facing AI systems.
Shreya Rajagopal
University of Michigan, USA
Jae Ho Sohn
University of California, San Francisco, USA
Hari Subramonyam
Stanford University, USA
Shiwali Mohan
AI Scientist
Artificial Intelligence, Agents, Multi-Agent Systems, Agent Architectures, Cognitive Science