🤖 AI Summary
To address the high labor cost and low efficiency of dense image captioning annotation, this paper proposes an AI-in-the-loop chained collaborative annotation framework. It introduces a "residual" annotation paradigm in which multiple annotators sequentially supplement uncovered visual semantic units (i.e., object–attribute trees), integrated with a read-while-speaking multimodal human–AI interface: speech input drives real-time transcription and structured post-processing, systematically exploiting the cognitive characteristics of human reading comprehension and spoken expression. In an 8-participant user study, the framework achieves an annotation throughput of 0.42 units/second, 40% higher than a parallel baseline, and attains a Recall@10 of 41.13% (+0.61 percentage points) in image–text retrieval. This work is the first to combine cognition-inspired chained residual annotation with speech-augmented multimodal interaction for dense image captioning, significantly improving both annotation scale and semantic comprehensiveness under budget constraints.
📝 Abstract
While densely annotated image captions significantly facilitate the learning of robust vision-language alignment, methodologies for systematically optimizing human annotation effort remain underexplored. We introduce Chain-of-Talkers (CoTalk), an AI-in-the-loop methodology designed to maximize the number of annotated samples and improve their comprehensiveness under a fixed budget (e.g., total human annotation time). The framework is built on two key insights. First, sequential annotation reduces redundant workload compared to conventional parallel annotation, since each subsequent annotator only needs to annotate the "residual": the missing visual information that previous annotations have not covered. Second, humans process textual input faster by reading, while producing annotations with much higher throughput by talking; a multimodal interface therefore optimizes efficiency on both sides. We evaluate the framework from two aspects: an intrinsic evaluation that assesses the comprehensiveness of semantic units, obtained by parsing detailed captions into object–attribute trees and analyzing their effective connections; and an extrinsic evaluation that measures the practical value of the annotated captions in facilitating vision-language alignment. Experiments with eight participants show that CoTalk improves annotation speed (0.42 vs. 0.30 units/sec) and retrieval performance (41.13% vs. 40.52%) over the parallel method.
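The core mechanics described above, parsing captions into object–attribute trees, keeping only each annotator's "residual" semantic units, and measuring throughput in units per second, can be illustrated with a minimal sketch. All function names and data here are hypothetical; the paper's actual parsing pipeline is not reproduced:

```python
# Hypothetical sketch of chained "residual" annotation: each annotator in the
# chain contributes only the semantic units (object-attribute pairs) that
# earlier annotators have not already covered. Data and names are illustrative.

def to_units(tree):
    """Flatten an object-attribute tree into a set of semantic units.

    `tree` maps each object name to its list of attributes; the object itself
    also counts as one unit, encoded as (object, None).
    """
    units = set()
    for obj, attrs in tree.items():
        units.add((obj, None))
        for attr in attrs:
            units.add((obj, attr))
    return units

def chain_annotate(annotations):
    """Sequentially merge annotators' trees, keeping only residual units."""
    covered = set()
    residuals = []
    for tree in annotations:
        new_units = to_units(tree) - covered  # the "residual" this annotator adds
        residuals.append(new_units)
        covered |= new_units
    return covered, residuals

def throughput(covered, total_seconds):
    """Annotation speed in semantic units per second."""
    return len(covered) / total_seconds

# Toy example: annotator 2 only contributes what annotator 1 missed.
a1 = {"dog": ["brown", "running"], "grass": ["green"]}
a2 = {"dog": ["brown", "small"], "sky": ["cloudy"]}
covered, residuals = chain_annotate([a1, a2])
```

In this toy run, annotator 2's residual contains only `("dog", "small")`, `("sky", None)`, and `("sky", "cloudy")`; the already-covered `("dog", "brown")` is skipped, which is the redundancy reduction the sequential design targets.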