🤖 AI Summary
This work addresses the challenge that large language models (LLMs) struggle to comprehend high-level semantic aspects of advertising videos, such as marketing logic, persuasive strategies, and audience engagement. To this end, we introduce AdsQA, the first multi-task benchmark for advertising video understanding, comprising 1,544 ads and 10,962 annotated video clips. We propose ReAd-R, a reinforcement-learning model that reflects on questions and generates answers via reward-driven optimization, enabling deep reasoning in complex advertising contexts. A comprehensive evaluation of 14 mainstream LLMs demonstrates that ReAd-R achieves state-of-the-art performance, significantly outperforming strong baselines, including those with advanced chain-of-thought capabilities. Notably, this study pioneers the use of advertising videos as a rigorous testbed, advancing LLMs from low-level visual perception toward high-order marketing semantic cognition.
📝 Abstract
Large language models (LLMs) have taken a great step towards AGI. Meanwhile, an increasing number of domain-specific problems such as math and programming push these general-purpose models to continuously evolve by learning deeper expertise. It is thus time to further extend the diversity of specialized applications for knowledgeable LLMs, though collecting high-quality data with unexpected and informative tasks is challenging. In this paper, we propose to use advertisement (ad) videos as a challenging test-bed to probe the ability of LLMs to perceive beyond the objective physical content of the common visual domain. Our motivation is to take full advantage of the clue-rich and information-dense traits of ad videos, e.g., marketing logic, persuasive strategies, and audience engagement. Our contribution is three-fold: (1) To our knowledge, this is the first attempt to use ad videos with well-designed tasks to evaluate LLMs. We contribute AdsQA, a challenging ad video QA benchmark derived from 1,544 ad videos with 10,962 clips, totaling 22.7 hours and providing 5 challenging tasks. (2) We propose ReAd-R, a DeepSeek-R1-styled RL model that reflects on questions and generates answers via reward-driven optimization. (3) We benchmark 14 top-tier LLMs on AdsQA, and ReAd-R achieves the state of the art, outperforming strong competitors equipped with long-chain reasoning capabilities by a clear margin.