Are Multimodal Large Language Models Good Annotators for Image Tagging?

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost of manual image annotation by proposing TagLLM, a framework that leverages multimodal large language models (MLLMs) for automated image tagging. TagLLM generates candidate labels through structured grouping prompts and refines them via an interactive semantic calibration mechanism to resolve ambiguities and enhance label quality. Experimental results demonstrate that TagLLM reduces annotation costs to one-thousandth of those associated with human labeling while achieving over 90% of the downstream task performance obtained with human-annotated data. This approach effectively narrows the performance gap between MLLM-generated and human-provided annotations by 60%–80%, highlighting its potential as a scalable and efficient alternative to manual image labeling.

📝 Abstract
Image tagging, a fundamental vision task, traditionally relies on human-annotated datasets to train multi-label classifiers, which incurs significant labor and cost. While Multimodal Large Language Models (MLLMs) offer promising potential to automate annotation, their capability to replace human annotators remains underexplored. This paper analyzes the gap between MLLM-generated and human annotations and proposes an effective solution that enables MLLM-based annotation to replace manual labeling. Our analysis of MLLM annotations reveals that, under a conservative estimate, MLLMs can reduce annotation cost to as low as one-thousandth of the human cost, since the expense mainly comes from GPU usage, which is nearly negligible compared to manual effort. Their annotation quality reaches about 50% to 80% of human performance, while achieving over 90% of human-level performance on downstream training tasks. Motivated by these findings, we propose TagLLM, a novel framework for image tagging that aims to narrow the gap between MLLM-generated and human annotations. TagLLM comprises two components: candidate generation, which employs structured group-wise prompting to efficiently produce a compact candidate set that covers as many true labels as possible while reducing subsequent annotation workload; and label disambiguation, which interactively calibrates the semantic concepts of categories in the prompts and effectively refines the candidate labels. Extensive experiments show that TagLLM substantially narrows the gap between MLLM-generated and human annotations, especially in downstream training performance, where it closes about 60% to 80% of the difference.
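The two-stage pipeline described in the abstract can be illustrated with a minimal sketch. All function names, prompt templates, and the `query_mllm` interface below are assumptions for illustration only, not the paper's actual implementation: stage one splits the label vocabulary into groups and queries the MLLM once per group to collect candidates; stage two re-queries each candidate with a calibrated category definition to filter out ambiguous labels.

```python
# Hypothetical sketch of a TagLLM-style pipeline. The query_mllm callable,
# prompt wording, and group size are illustrative assumptions.

def group_wise_prompts(labels, group_size=20):
    """Split the label vocabulary into groups and build one prompt per group,
    so each MLLM query only checks a manageable subset of categories."""
    prompts = []
    for i in range(0, len(labels), group_size):
        group = labels[i:i + group_size]
        prompts.append(
            "Which of the following categories appear in the image? "
            + ", ".join(group)
        )
    return prompts

def generate_candidates(query_mllm, image, labels, group_size=20):
    """Stage 1 (candidate generation): take the union of labels the MLLM
    confirms across all group-wise prompts."""
    candidates = set()
    for prompt in group_wise_prompts(labels, group_size):
        candidates |= set(query_mllm(image, prompt))
    return candidates

def disambiguate(query_mllm, image, candidates, definitions):
    """Stage 2 (label disambiguation): re-query each candidate with an
    explicit category definition and keep only confirmed labels."""
    refined = set()
    for label in sorted(candidates):
        prompt = (
            f"Definition: '{label}' means {definitions.get(label, label)}. "
            f"Under this definition, does the image contain '{label}'? "
            "Answer yes or no."
        )
        if query_mllm(image, prompt) == "yes":
            refined.add(label)
    return refined
```

In this sketch, grouping keeps each prompt short enough to fit a large vocabulary into many small queries, while the definition-carrying prompts in stage two play the role of the paper's interactive semantic calibration.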
Problem

Research questions and friction points this paper is trying to address.

image tagging
multimodal large language models
annotation cost
human annotation
label quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Large Language Models
Image Tagging
Automated Annotation
Prompt Engineering
Label Disambiguation