Self-reflection in Automated Qualitative Coding: Improving Text Annotation through Secondary LLM Critique

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the high false-positive rates of large language models (LLMs) in zero- or few-shot qualitative coding, particularly errors stemming from “definition misinterpretation” and “meta-discussion confusion,” which drive some categories to unusably low accuracy. To mitigate this, the authors propose a two-stage automated coding framework: in the first stage, an LLM applies labels based on a human-curated codebook; in the second, a critic LLM performs targeted self-reflection by re-evaluating each positive label using both the original text and the initial reasoning trace. Guided by human validation, the approach incorporates codebook adaptation, specialized critique rules, and F1-oriented evaluation. Evaluated on 3,000 emails across six coding categories, the method improves F1 scores by 0.04–0.25, with two previously low-performing categories rising from 0.52/0.55 to 0.69/0.79, effectively suppressing noise and restoring classifier usability.
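
The summary above describes the two-stage design only at a high level. The sketch below shows one way such a pipeline could be wired together; it assumes a generic `call_llm` helper and illustrative prompts, not the paper's actual codebook, prompt wording, or model choices.

```python
# Minimal sketch of a two-stage coding loop: a first-line coder applies the
# codebook, then a critic re-reads only the positive labels together with the
# first coder's rationale. All names and prompts here are illustrative.
from dataclasses import dataclass


@dataclass
class Annotation:
    code: str
    positive: bool
    rationale: str


def call_llm(prompt: str) -> str:
    """Placeholder for whatever chat-completion client is in use."""
    raise NotImplementedError


def stage_one(text: str, codebook: dict[str, str]) -> list[Annotation]:
    """First-line coder: apply every code definition to the passage."""
    annotations = []
    for code, definition in codebook.items():
        reply = call_llm(
            f"Code definition ({code}): {definition}\n"
            f"Passage: {text}\n"
            "Does the passage instantiate this code? Answer YES or NO, then explain."
        )
        positive = reply.strip().upper().startswith("YES")
        annotations.append(Annotation(code, positive, reply))
    return annotations


def stage_two(text: str, ann: Annotation, critic_rules: dict[str, str]) -> bool:
    """Critic: re-read the passage plus the stage-one rationale and confirm or
    reject the positive label. Negative labels pass through unchanged."""
    if not ann.positive:
        return False
    reply = call_llm(
        f"Passage: {text}\n"
        f"Proposed code: {ann.code}\n"
        f"First coder's rationale: {ann.rationale}\n"
        f"Code-specific checks: {critic_rules.get(ann.code, 'none')}\n"
        "Is this label a true positive? Answer KEEP or REJECT, then explain."
    )
    return reply.strip().upper().startswith("KEEP")
```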

📝 Abstract
Large language models (LLMs) allow for sophisticated qualitative coding of large datasets, but zero- and few-shot classifiers can produce an intolerable number of errors, even with careful, validated prompting. We present a simple, generalizable two-stage workflow: an LLM applies a human-designed, LLM-adapted codebook; a secondary LLM critic then performs self-reflection on each positive label by re-reading the source text alongside the first model's rationale and issuing a final decision. We evaluate this approach on six qualitative codes over 3,000 high-content emails from Apache Software Foundation project evaluation discussions. Our human audit of 360 positive annotations (60 passages × six codes) found that the first-line LLM had a false-positive rate of 8% to 54%, despite F1 scores of 0.74 to 1.00 in testing. Recoding all stage-one annotations with the second, self-reflection stage improved F1 by 0.04 to 0.25, bringing two especially poorly performing codes up to 0.69 and 0.79 from 0.52 and 0.55, respectively. Our manual evaluation identified two recurrent error classes: misinterpretation (violations of code definitions) and meta-discussion (debate about a project evaluation criterion mistaken for its use as a decision justification). Code-specific critic clauses addressing observed failure modes were especially effective once tested and refined, replicating the codebook-adaptation process used for LLM interpretation in stage one. We explain how favoring recall in first-line LLM annotation, combined with secondary critique, delivers precision-first, compute-light control. With human guidance and validation, self-reflection slots into existing LLM-assisted annotation pipelines to reduce noise and potentially salvage unusable classifiers.
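
The abstract's core argument, that a recall-favoring first stage plus a precision-oriented critic can raise F1, can be checked with simple arithmetic. The counts below are invented for illustration and are not the paper's data; they only show how pruning false positives lifts precision faster than it costs recall.

```python
# Toy illustration (invented counts, not the paper's results) of how removing
# stage-one false positives trades a small recall loss for a larger precision
# gain, raising F1 overall.
def f1(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Recall-favoring stage one: many false positives.
print(round(f1(tp=80, fp=70, fn=10), 2))  # 0.67

# After the critic rejects most false positives at the cost of a few
# true positives, precision rises faster than recall falls.
print(round(f1(tp=74, fp=15, fn=16), 2))  # 0.83
```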
Problem

Research questions and friction points this paper is trying to address.

qualitative coding
large language models
false positives
annotation errors
self-reflection
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-reflection
qualitative coding
LLM critique
two-stage annotation
false-positive reduction
Zackary Okun Dunivin
University of Stuttgart
Mobina Noori
University of California Davis
Seth Frey
Associate Professor, University of California, Davis
computational institutions, the commons, social complexity, cognitive science, online community
Curtis Atkinson
University of Washington