Bridging Human Interpretation and Machine Representation: A Landscape of Qualitative Data Analysis in the LLM Era

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the inconsistent and poorly governed application of large language models (LLMs) in qualitative research, which stems from a lack of clear distinction between levels of meaning-making and modeling commitments. To resolve this, the paper proposes a 4×4 landscape framework that crosses four meaning-making levels (description, categorization, interpretation, and theorization) with four modeling commitments (static structure, sequential staging, causal pathways, and feedback dynamics). Through systematic literature mapping and multidimensional categorization, the analysis reveals that current LLM applications predominantly occupy lower-order meaning-making and low-commitment modeling, with scant engagement in interpretive or theoretical inference and dynamic mechanisms. This work establishes a unified framework for delineating LLM capabilities and gaps in qualitative data analysis, offering an agenda and methodological foundation for governable AI-assisted qualitative research.

📝 Abstract
LLMs are increasingly used to support qualitative research, yet existing systems produce outputs that vary widely, from trace-faithful summaries to theory-mediated explanations and system models. To make these differences explicit, we introduce a 4×4 landscape crossing four levels of meaning-making (descriptive, categorical, interpretive, theoretical) with four levels of modeling (static structure, stages/timelines, causal pathways, feedback dynamics). Applying the landscape to prior LLM-based automation highlights a strong skew toward low-level meaning and low-commitment representations, with few reliable attempts at interpretive/theoretical inference or dynamical modeling. Based on the revealed gap, we outline an agenda for applying and building LLM systems that make their interpretive and modeling commitments explicit, selectable, and governable.
Problem

Research questions and friction points this paper is trying to address.

qualitative data analysis
large language models
interpretive inference
theoretical modeling
human-machine representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

qualitative data analysis
large language models
interpretive modeling
meaning-making landscape
governable AI