🤖 AI Summary
Current applications of large language models (LLMs) to thematic analysis of unstructured clinical text remain fragmented along four dimensions: heterogeneous analytical objectives, inconsistent benchmark datasets, nonstandardized prompting strategies, and widely divergent evaluation practices. Together, these gaps impede cross-study comparability and clinical deployment. Through a systematic literature review complemented by an interview with a practicing clinician, this study identifies *evaluation heterogeneity* as the primary bottleneck. We therefore propose a standardized evaluation framework centered on three dimensions, *validity*, *reliability*, and *interpretability*, which integrates automated semantic similarity metrics with qualitative expert validation. To our knowledge, this is the first reproducible, comparable, and interpretable evaluation paradigm designed specifically for LLM-based clinical thematic analysis; it aims to strengthen methodological rigor, enable robust cross-model comparison, and improve translational relevance for real-world clinical applications.
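As a rough illustration of the framework's automated component, the minimal sketch below scores LLM-generated themes against expert reference themes via embedding cosine similarity. The encoder choice (`all-MiniLM-L6-v2`), the best-match aggregation, and the example themes are assumptions for demonstration; the study does not prescribe a specific metric or model.

```python
# Minimal sketch: scoring LLM-generated themes against expert reference themes
# with embedding cosine similarity. The encoder and the best-match aggregation
# are illustrative assumptions, not the paper's prescribed metric.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def theme_similarity(llm_themes: list[str], expert_themes: list[str]) -> float:
    """Mean of each LLM theme's best cosine match among the expert themes."""
    llm_emb = encoder.encode(llm_themes, convert_to_tensor=True)
    ref_emb = encoder.encode(expert_themes, convert_to_tensor=True)
    sims = util.cos_sim(llm_emb, ref_emb)        # (n_llm, n_expert) matrix
    return sims.max(dim=1).values.mean().item()  # best expert match per theme

# Hypothetical themes for demonstration only.
print(theme_similarity(
    ["difficulty coordinating follow-up care", "frustration with wait times"],
    ["barriers to care coordination", "dissatisfaction with access"],
))
```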
📝 Abstract
This position paper examines how large language models (LLMs) can support thematic analysis of unstructured clinical transcripts, a widely used but resource-intensive method for uncovering patterns in patient and provider narratives. We conducted a systematic review of recent studies applying LLMs to thematic analysis, complemented by an interview with a practicing clinician. Our findings reveal that current approaches remain fragmented across multiple dimensions, including the type of thematic analysis performed and the datasets, prompting strategies, and models used, and most notably in evaluation. Existing evaluation methods vary widely, from qualitative expert review to automatic similarity metrics, hindering progress and preventing meaningful benchmarking across studies. We argue that establishing standardized evaluation practices is critical for advancing the field. To this end, we propose an evaluation framework centered on three dimensions: validity, reliability, and interpretability.
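To make the reliability dimension concrete, one plausible operationalization, which the abstract itself does not specify, is chance-corrected agreement between an LLM's theme assignments and an expert coder's, e.g., Cohen's kappa. The labels and data below are invented purely for illustration.

```python
# Illustrative reliability check: agreement between LLM and expert theme
# assignments per transcript excerpt, measured with Cohen's kappa.
# Labels and data are hypothetical; the paper does not mandate this statistic.
from sklearn.metrics import cohen_kappa_score

llm_codes    = ["access", "cost", "trust", "access", "cost", "trust"]
expert_codes = ["access", "cost", "trust", "cost",   "cost", "trust"]

kappa = cohen_kappa_score(llm_codes, expert_codes)
print(f"Cohen's kappa (LLM vs. expert coder): {kappa:.2f}")  # 1.0 = perfect agreement
```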