🤖 AI Summary
Prior multimodal work predominantly addresses simpler tasks (e.g., VQA), while image-based dialogue (VisDial) and video-based dialogue (AVSD) have been studied in isolation, without unified modeling or cross-domain synergy. This paper introduces the first unified architecture for both dialogue modalities, decoupling spatial and temporal feature modeling via a multi-expert routing mechanism. It further proposes a joint matching-and-contrastive cross-modal alignment framework that systematically probes the transferability between the image- and video-dialogue domains, thereby mitigating domain shift. The method integrates Mixture-of-Experts (MoE) routing, spatiotemporal feature disentanglement, zero-shot transfer, and fine-tuning within a single training paradigm. Evaluated on four benchmarks spanning VisDial and AVSD (DSTC7 and DSTC8), the approach establishes new state-of-the-art results across all of them, advancing multimodal dialogue understanding.
📝 Abstract
We present V$^2$Dial, a novel expert-based model specifically geared towards simultaneously handling image and video input data for multimodal conversational tasks. Current multimodal models primarily focus on simpler tasks (e.g., VQA, VideoQA, video-text retrieval) and often neglect the more challenging conversational counterparts, such as video and visual/image dialog. Moreover, works on these two conversational tasks evolved separately from each other despite their apparent similarities, limiting their broader applicability. To this end, we propose to unify both tasks using a single model that, for the first time, jointly learns the spatial and temporal features of images and videos by routing them through dedicated experts, and aligns them using matching and contrastive learning techniques. Furthermore, we systematically study the domain shift between the two tasks by investigating whether, and to what extent, these seemingly related tasks can mutually benefit from their respective training data. Extensive evaluations on the widely used video and visual dialog datasets of AVSD and VisDial show that our model achieves new state-of-the-art results across four benchmarks in both zero-shot and fine-tuning settings.
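The paper's exact architecture is not reproduced here, but the two ingredients the abstract names — routing features through dedicated spatial/temporal experts and aligning modalities with contrastive objectives — can be illustrated with a minimal NumPy sketch. All names below (`expert_route`, `matching_contrastive_loss`, `w_gate`, and the single-matrix "experts") are hypothetical stand-ins, not V$^2$Dial's actual components:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def expert_route(x, w_gate, spatial_expert, temporal_expert):
    """Soft-route each token between a spatial and a temporal expert.

    x: (tokens, dim) features; w_gate: (dim, 2) router weights.
    """
    gate = softmax(x @ w_gate)  # per-token routing weights over the two experts
    return gate[:, :1] * spatial_expert(x) + gate[:, 1:] * temporal_expert(x)

def matching_contrastive_loss(vis, txt, temperature=0.07):
    """Symmetric InfoNCE-style loss pulling paired visual/text embeddings together."""
    vis = vis / np.linalg.norm(vis, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = vis @ txt.T / temperature           # pairwise cosine similarities
    diag = np.arange(len(logits))
    p_v2t = softmax(logits, axis=1)[diag, diag]  # prob. of the matching text per visual
    p_t2v = softmax(logits, axis=0)[diag, diag]  # prob. of the matching visual per text
    return -0.5 * (np.log(p_v2t).mean() + np.log(p_t2v).mean())

# Toy demo: route 5 tokens of dimension 8, then align 4 visual/text pairs.
rng = np.random.default_rng(0)
dim = 8
w_gate = rng.normal(size=(dim, 2))
w_s, w_t = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
tokens = expert_route(rng.normal(size=(5, dim)), w_gate,
                      lambda x: np.tanh(x @ w_s),   # stand-in spatial expert
                      lambda x: np.tanh(x @ w_t))   # stand-in temporal expert
loss = matching_contrastive_loss(rng.normal(size=(4, dim)), rng.normal(size=(4, dim)))
```

In the full model, each "expert" would be a transformer block rather than a single weight matrix, and the contrastive loss would be complemented by a binary matching head; the sketch only conveys how soft routing and symmetric alignment compose.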