🤖 AI Summary
Fine-grained video retrieval for contextual advertising faces the dual challenges of exploding content volume and tightening privacy constraints. To address them, we propose a multimodal expert collaboration architecture that requires no joint training: the modeling of video, audio, subtitles, and semantic metadata (e.g., objects, actions, emotions) is decoupled, and the heterogeneous modal representations are fused for high-precision zero-shot text-to-video retrieval. The approach overcomes the limitations of conventional single-alignment models, significantly improving semantic alignment accuracy and content controllability while ensuring brand safety and regulatory compliance. On multiple standard benchmarks it matches or surpasses state-of-the-art performance; empirical results show an average 12.7% improvement in retrieval accuracy over unimodal baselines, and the system supports millisecond-level brand-safety filtering. The method has been successfully deployed in an industrial-scale advertising system.
📝 Abstract
Contextual advertising serves ads that are aligned with the content the user is viewing. The rapid growth of video content on social platforms and streaming services, along with privacy concerns, has increased the need for contextual advertising. Placing the right ad in the right context creates a seamless and pleasant ad-viewing experience, resulting in higher audience engagement and, ultimately, better ad monetization. From a technology standpoint, effective contextual advertising requires a video retrieval system capable of understanding complex video content at a very granular level. Current text-to-video retrieval models based on joint multimodal training demand large datasets and computational resources, limiting their practicality, and they lack key functionalities required for integration into the ad ecosystem. We introduce ContextIQ, a multimodal expert-based video retrieval system designed specifically for contextual advertising. ContextIQ utilizes modality-specific experts (video, audio, transcript/captions, and metadata such as objects, actions, and emotions) to create semantically rich video representations. We show that our system, without joint training, achieves better or comparable results to state-of-the-art models and commercial solutions on multiple text-to-video retrieval benchmarks. Our ablation studies highlight the benefits of leveraging multiple modalities for enhanced video retrieval accuracy, rather than relying on a vision-language model alone. Furthermore, we show how video retrieval systems such as ContextIQ can be used for contextual advertising in an ad ecosystem while also addressing concerns related to brand safety and the filtering of inappropriate content.
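To make the expert-collaboration idea concrete, the sketch below shows one simple way per-modality expert embeddings could be combined for zero-shot text-to-video retrieval: each expert scores the query independently against its own index, and the per-expert cosine similarities are fused with fixed weights before ranking. The expert names, fusion weights, and placeholder embeddings are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal late-fusion sketch for zero-shot text-to-video retrieval.
# Placeholder random embeddings stand in for modality-specific expert outputs
# (video, audio, transcript, metadata); the fusion weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
EXPERTS = ["video", "audio", "transcript", "metadata"]
NUM_VIDEOS, DIM = 1000, 512

# Pre-computed per-expert embeddings for every video in the index
# (in practice these would come from frozen, modality-specific encoders).
video_index = {e: rng.standard_normal((NUM_VIDEOS, DIM)) for e in EXPERTS}


def cosine_scores(query_vec: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and every row of `matrix`."""
    q = query_vec / np.linalg.norm(query_vec)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return m @ q


def retrieve(query_embeddings: dict, weights: dict, top_k: int = 10) -> np.ndarray:
    """Fuse per-expert similarities with fixed weights and return top-k video ids."""
    fused = np.zeros(NUM_VIDEOS)
    for expert, q_vec in query_embeddings.items():
        fused += weights[expert] * cosine_scores(q_vec, video_index[expert])
    return np.argsort(-fused)[:top_k]


# Example: the text query is embedded once per expert space (e.g. by the text
# tower paired with each expert); random placeholders are used here.
query = {e: rng.standard_normal(DIM) for e in EXPERTS}
weights = {"video": 0.4, "audio": 0.2, "transcript": 0.3, "metadata": 0.1}
print(retrieve(query, weights, top_k=5))
```

Because fusion happens at the score level, brand-safety or content filters could be applied as a cheap post-ranking mask over the retrieved ids, which is one way the millisecond-level filtering mentioned above could fit into such a pipeline.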