🤖 AI Summary
This study evaluates how well open-source vision-language models (VLMs) identify privacy-related visual attributes in a zero-shot setting and examines their alignment with human annotations. Comparing model predictions against human-labeled data, the authors find that VLMs flag privacy-sensitive content more often than human annotators do. Notably, when multiple VLMs converge on the same prediction, these outputs not only complement human judgments but also surface privacy-relevant elements that human annotators overlooked. The findings demonstrate the potential of VLMs for large-scale privacy-aware data annotation and point toward automated privacy perception in visual content.
📝 Abstract
Vision-language models (VLMs) are often used for zero-shot detection of visual attributes in images. We present a zero-shot evaluation of open-source VLMs for privacy-related attribute recognition. We identify the attributes for which VLMs exhibit strong inter-annotator agreement, and discuss cases where human and VLM annotations disagree. Our results show that, when evaluated against human annotations, VLMs tend to predict the presence of privacy attributes more often than human annotators. In addition, we find that for attributes with high inter-annotator agreement among VLMs, their predictions can complement human annotation by identifying attributes overlooked by human annotators. This highlights the potential of VLMs to support privacy annotation in large-scale image datasets.
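The abstract does not specify which agreement statistic the authors use; a common choice for comparing two binary annotators on a per-attribute basis is Cohen's kappa. Below is a minimal, self-contained sketch of that computation. The attribute name and the label vectors are hypothetical, purely for illustration.

```python
def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two binary annotation lists (1 = attribute present).

    po is the observed agreement rate; pe is the agreement expected by
    chance given each annotator's marginal rate of predicting "present".
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_a = sum(labels_a) / n
    p_b = sum(labels_b) / n
    pe = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

# Hypothetical per-image labels for one privacy attribute (e.g. "face visible"):
human = [1, 0, 1, 1, 0, 0, 1, 0]
vlm   = [1, 1, 1, 1, 0, 1, 1, 0]   # the VLM predicts "present" more often
print(cohen_kappa(human, vlm))     # → 0.5
```

The same function can be applied pairwise among several VLMs to find the attributes on which they agree strongly, mirroring the paper's selection of high-agreement attributes.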