🤖 AI Summary
Existing multimodal large language models (MLLMs) show weak cross-modal attention and limited interaction understanding in multi-speaker video scenes, owing to misalignment between visual and textual tokens—in particular, the lack of speaker-consistent grounding. To address this, we propose a parameter-free dynamic cross-modal head selection mechanism coupled with an adaptive social-aware attention bias. Our approach combines attention analysis, vision–language alignment modeling, and speaker spatial localization cues—without modifying the model architecture or introducing trainable parameters—thereby strengthening speaker-specific alignment with both verbal and nonverbal behavioral cues. Evaluated on three major benchmarks—TVQA+, MMSI, and OnlineMMSI—our method achieves state-of-the-art performance when integrated into LLaVA-NeXT-Video, Qwen2.5-VL, and InternVL3. Attention visualizations confirm precise focus on speaker-relevant visual regions, validating improved cross-modal coherence and social interaction comprehension.
📝 Abstract
Understanding social interaction in video requires reasoning over a dynamic interplay of verbal and non-verbal cues: who is speaking, to whom, and with what gaze or gestures. While Multimodal Large Language Models (MLLMs) are natural candidates, simply adding visual inputs yields surprisingly inconsistent gains on social tasks. Our quantitative analysis of cross-modal attention inside state-of-the-art MLLMs reveals a core failure mode: in multi-speaker scenes, visual and textual tokens lack speaker-consistent alignment, exhibiting substantially weaker cross-modal attention than in object-centric images. To address this, we propose a multimodal multi-speaker attention alignment method that can be integrated into existing MLLMs. First, we introduce dynamic cross-modal head selection to identify the attention heads most responsible for grounding. Then, an adaptive social-aware attention bias, computed from existing attention patterns and speaker locations, is injected into the attention mechanism. This bias reinforces alignment between a speaker's visual representation and their utterances without introducing trainable parameters or architectural changes. We integrate our method into three distinct MLLMs (LLaVA-NeXT-Video, Qwen2.5-VL, and InternVL3) and evaluate on three benchmarks (TVQA+, MMSI, OnlineMMSI). Across four social tasks, results demonstrate that our approach improves the social reasoning ability of MLLMs and achieves state-of-the-art results. Attention visualizations confirm that our method successfully focuses the model on speaker-relevant regions, enabling more robust multi-party social reasoning. Our implementation and model will be available at https://github.com/ut-vision/SocialInteraction.
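The two-step idea in the abstract—select the heads that carry the most cross-modal grounding, then add a speaker-aware bias to their attention logits—can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: all function names, shapes, the head-scoring rule (total attention mass onto visual tokens), and the additive bias are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def select_grounding_heads(attn, visual_idx, top_k=4):
    """attn: (heads, q_len, k_len) attention weights for one layer.
    Rank heads by total attention mass flowing onto visual tokens
    (a hypothetical proxy for 'most responsible for grounding')."""
    mass = attn[:, :, visual_idx].sum(axis=(1, 2))
    return np.argsort(mass)[::-1][:top_k]

def speaker_bias(q_len, k_len, speaker_text_idx, speaker_visual_idx, strength=1.0):
    """Additive logit bias linking a speaker's utterance (text) tokens
    to the visual tokens inside that speaker's localized region."""
    bias = np.zeros((q_len, k_len))
    for q in speaker_text_idx:
        bias[q, speaker_visual_idx] += strength
    return bias

# Toy setup: 2 heads, 6 text query tokens, 8 key tokens (last 4 are visual).
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 6, 8))
attn = softmax(logits)
visual_idx = np.arange(4, 8)

heads = select_grounding_heads(attn, visual_idx, top_k=1)
bias = speaker_bias(6, 8, speaker_text_idx=[0, 1], speaker_visual_idx=[4, 5])
logits[heads] += bias          # bias only the selected grounding heads
attn_new = softmax(logits)     # attention from the speaker's utterance tokens
                               # now shifts toward that speaker's visual region
```

Because the bias is additive in logit space and applied only to the selected heads, the mechanism adds no trainable parameters and leaves the rest of the attention pattern intact, consistent with the parameter-free design described above.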