🤖 AI Summary
Evaluating the real-world clinical utility of generative AI (e.g., ChatGPT, Claude) in helping patients and caregivers understand chest CT reports and images—and thereby supporting shared decision-making—remains challenging due to the lack of human-centered, task-oriented assessment frameworks.
Method: We conducted thematic analysis of authentic clinician–patient dialogues to identify core information needs—including terminology explanation, lesion localization, and prognostic interpretation—and benchmarked model responses against radiologist-derived ground truth across these themes.
Contribution/Results: Our paradigm moves beyond conventional static benchmarks by systematically assessing depth of clinical understanding and decision-support capability. Results reveal substantial variation in model performance across themes and underscore that effective patient-facing AI must accommodate diverse, dynamic, and context-sensitive information interactions. This work establishes an empirically grounded, human-centered evaluation framework and a novel clinical benchmark for trustworthy medical AI.
📝 Abstract
Generative AI systems such as ChatGPT and Claude are built upon language models that are typically evaluated for accuracy on curated benchmark datasets. Such evaluation paradigms measure the predictive and reasoning capabilities of language models but do not assess whether they can provide information that is useful to people. In this paper, we take initial steps toward developing an evaluation paradigm that centers human understanding and decision-making. We study the utility of generative AI systems in supporting people in a concrete task: making sense of clinical reports and imagery in order to make a clinical decision. We conducted a formative need-finding study in which participants discussed chest computed tomography (CT) scans and associated radiology reports of a fictitious close relative with a cardiothoracic radiologist. Using thematic analysis of the conversations between participants and medical experts, we identified commonly occurring themes across interactions, including clarifying medical terminology, locating the problems mentioned in the report in the scanned image, understanding disease prognosis, discussing the next diagnostic steps, and comparing treatment options. Based on these themes, we evaluated two state-of-the-art generative AI systems against the radiologist's responses. Our results reveal variability in the quality of responses generated by the models across these themes. We highlight the need for patient-facing generative AI systems to accommodate a diverse range of conversational themes, catering to the real-world informational needs of patients.