🤖 AI Summary
Open-domain visual entity recognition (OD-VER) aims to link entities in images to dynamically evolving real-world knowledge bases (e.g., Wikidata), yet it faces challenges including abundant unseen entities, a severe long-tail distribution, sparse supervision, and high visual-semantic ambiguity. This paper proposes KnowCoL, a knowledge-guided contrastive learning framework that, for the first time, explicitly incorporates Wikidata's structured relational knowledge into multimodal contrastive learning, constructing a joint embedding space over vision, text, and knowledge graphs to enable zero-shot entity recognition. KnowCoL generalizes to entities unseen during training without any fine-tuning. On the OVEN benchmark, its smallest model achieves a 10.5% absolute accuracy gain on unseen entities while using only 1/35 of the parameters of the current state of the art, substantially improving the efficiency and scalability of open-set recognition.
📝 Abstract
Open-domain visual entity recognition aims to identify and link entities depicted in images to a vast and evolving set of real-world concepts, such as those found in Wikidata. Unlike conventional classification tasks with fixed label sets, it operates under open-set conditions, where most target entities are unseen during training and exhibit long-tail distributions. This makes the task inherently challenging due to limited supervision, high visual ambiguity, and the need for semantic disambiguation. In this work, we propose a Knowledge-guided Contrastive Learning (KnowCoL) framework that embeds both images and text descriptions in a shared semantic space grounded by structured information from Wikidata. By abstracting visual and textual inputs to a conceptual level, the model leverages entity descriptions, type hierarchies, and relational context to support zero-shot entity recognition. We evaluate our approach on the OVEN benchmark, a large-scale open-domain visual recognition dataset with Wikidata IDs as the label space. Our experiments show that combining visual, textual, and structured knowledge greatly improves accuracy, especially for rare and unseen entities. Our smallest model improves accuracy on unseen entities by 10.5% compared to the state of the art, despite being 35 times smaller.
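To make the core idea concrete, the sketch below shows a generic symmetrized InfoNCE-style contrastive loss over three aligned modalities (image, text, and knowledge-graph embeddings), which is the standard way such a joint embedding space is trained. This is an illustrative assumption, not the paper's actual loss: the function names, the pairwise averaging scheme, and the temperature value are all hypothetical, and KnowCoL's use of type hierarchies and relational context is not modeled here.

```python
import numpy as np

def _normalize(x):
    """Project embeddings onto the unit sphere (cosine-similarity space)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def _info_nce(a, b, temperature):
    """Symmetrized InfoNCE between two batches of aligned unit vectors.

    Row i of `a` and row i of `b` are treated as the positive pair;
    all other rows in the batch serve as in-batch negatives.
    """
    logits = a @ b.T / temperature

    def cross_entropy(l):
        # Log-softmax with max-subtraction for numerical stability;
        # the positives sit on the diagonal.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

def knowledge_guided_contrastive_loss(img_emb, txt_emb, kg_emb, temperature=0.07):
    """Average the pairwise contrastive losses over the three modalities
    (image-text, image-KG, text-KG), pulling matching triples together."""
    img, txt, kg = map(_normalize, (img_emb, txt_emb, kg_emb))
    return (_info_nce(img, txt, temperature)
            + _info_nce(img, kg, temperature)
            + _info_nce(txt, kg, temperature)) / 3
```

A batch here consists of triples that refer to the same Wikidata entity; minimizing the loss pulls the image, its textual description, and the entity's knowledge-graph embedding together while pushing apart embeddings of other entities in the batch.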