🤖 AI Summary
Multi-modal keyphrase prediction (MMKP) faces three key challenges: missing modalities, unseen phrases, and evaluation bias, where existing methods lack robustness and mainstream benchmarks overestimate performance due to train-test overlap. To address these, we propose Dynamic Chain-of-Thought (Dynamic CoT), a training framework that adaptively injects high-quality reasoning samples to mitigate "overthinking" and enhance complex reasoning in compact models. We further introduce a progressive training paradigm (zero-shot initialization → supervised fine-tuning → CoT distillation) to strengthen vision-language models' cross-modal semantic modeling and generation capabilities. Additionally, we construct a de-duplicated, low-overlap benchmark to enable calibrated evaluation. Our approach achieves significant improvements over state-of-the-art methods across multiple datasets. The code is publicly available.
📝 Abstract
Multi-modal keyphrase prediction (MMKP) aims to advance beyond text-only methods by incorporating multiple modalities of input information to produce a set of conclusive phrases. Traditional multi-modal approaches have proven to have significant limitations in handling the challenging absence (missing-modality) and unseen (novel-keyphrase) scenarios. Additionally, we identify shortcomings in existing benchmarks that overestimate model capability due to significant overlap between training and test sets. In this work, we propose leveraging vision-language models (VLMs) for the MMKP task. First, we use two widely used strategies, i.e., zero-shot prompting and supervised fine-tuning (SFT), to assess the lower-bound performance of VLMs. Next, to improve the complex reasoning capabilities of VLMs, we adopt Fine-tune-CoT, which leverages high-quality CoT reasoning data generated by a teacher model to fine-tune smaller models. Finally, to address the "overthinking" phenomenon, we propose a dynamic CoT strategy that adaptively injects CoT data during training, allowing the model to flexibly leverage its reasoning capabilities during the inference stage. We evaluate the proposed strategies on various datasets, and the experimental results demonstrate the effectiveness of the proposed approaches. The code is available at https://github.com/bytedance/DynamicCoT.
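The dynamic CoT idea above, injecting teacher CoT rationales only where they help, can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the field names (`difficulty`, `cot_target`, `direct_target`) and the threshold-based selection rule are assumptions made for the example.

```python
def select_target(sample, difficulty_threshold=0.5):
    """Pick the training target for one sample (hypothetical schema).

    Hard samples (difficulty >= threshold) keep the teacher-generated
    CoT rationale as the target; easy samples keep the plain keyphrase
    target, so the model is not pushed to "overthink" simple inputs.
    """
    if sample["difficulty"] >= difficulty_threshold:
        return sample["cot_target"]
    return sample["direct_target"]


def build_training_set(samples, difficulty_threshold=0.5):
    """Apply the per-sample selection rule to a whole dataset."""
    return [
        {"input": s["input"], "target": select_target(s, difficulty_threshold)}
        for s in samples
    ]
```

At inference time, a model trained on such a mixture can emit either a short direct answer or a full reasoning chain, which is the flexibility the dynamic CoT strategy aims for.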