Integrating Video and Text: A Balanced Approach to Multimodal Summary Generation and Evaluation

📅 2025-05-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models struggle to jointly model long-range multimodal episodic content, and current summarization metrics do not assess visual and textual factual consistency together. This paper proposes a zero-shot video-to-text summarization framework that (1) builds a structured screenplay representation of an episode, integrating key video moments, dialogue, and character information, while naming characters zero-shot from only audio, video, and transcripts; and (2) introduces MFactSum, a multimodal evaluation metric that scores summaries for factual consistency against both the vision and text modalities. Evaluated on SummScreen3D, the method outperforms state-of-the-art VLMs such as Gemini 1.5, generating summaries with 20% more relevant visual information while requiring only 25% of the video as input.

📝 Abstract
Vision-Language Models (VLMs) often struggle to balance visual and textual information when summarizing complex multimodal inputs, such as entire TV show episodes. In this paper, we propose a zero-shot video-to-text summarization approach that builds its own screenplay representation of an episode, effectively integrating key video moments, dialogue, and character information into a unified document. Unlike previous approaches, we simultaneously generate screenplays and name the characters in zero-shot, using only the audio, video, and transcripts as input. Additionally, we highlight that existing summarization metrics can fail to assess the multimodal content in summaries. To address this, we introduce MFactSum, a multimodal metric that evaluates summaries with respect to both vision and text modalities. Using MFactSum, we evaluate our screenplay summaries on the SummScreen3D dataset, demonstrating superiority against state-of-the-art VLMs such as Gemini 1.5 by generating summaries containing 20% more relevant visual information while requiring 75% less of the video as input.
Problem

Research questions and friction points this paper is trying to address.

Balancing visual and textual information in multimodal summaries
Generating zero-shot video-to-text summaries with integrated screenplay elements
Evaluating multimodal summaries with a new metric, MFactSum
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot video-to-text summarization with screenplay representation
Simultaneous zero-shot character naming and screenplay generation
Multimodal evaluation metric MFactSum for summaries
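The core idea behind MFactSum, scoring a summary's facts against both modalities rather than text alone, can be sketched as a weighted mix of a visual-grounding score and a textual-entailment score. The sketch below is purely illustrative: the function names, the word-overlap stand-ins for entailment and grounding, and the `alpha` mixing weight are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of an MFactSum-style multimodal factuality score.
# Word overlap stands in for real NLI entailment and visual grounding models.

def entailment_score(fact: str, transcript: str) -> float:
    """Stand-in textual entailment: fraction of fact words found in the
    transcript. A real implementation would use an NLI model."""
    fact_words = set(fact.lower().split())
    return len(fact_words & set(transcript.lower().split())) / max(len(fact_words), 1)

def grounding_score(fact: str, visual_tags: set) -> float:
    """Stand-in visual grounding: fraction of fact words matching concepts
    detected in the video frames."""
    fact_words = set(fact.lower().split())
    return len(fact_words & visual_tags) / max(len(fact_words), 1)

def mfactsum_like(facts, transcript, visual_tags, alpha=0.5):
    """Average per-fact score, mixing text entailment and visual grounding.
    alpha controls the text/vision balance (an assumed design choice)."""
    if not facts:
        return 0.0
    scores = [
        alpha * entailment_score(f, transcript)
        + (1 - alpha) * grounding_score(f, visual_tags)
        for f in facts
    ]
    return sum(scores) / len(scores)
```

For example, a summary fact fully entailed by the dialogue but only half-grounded in the detected visual concepts would score between the two, which is the "balance both modalities" behavior the paper argues plain text metrics miss.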