PersonaVlog: Personalized Multimodal Vlog Generation with Multi-Agent Collaboration and Iterative Self-Correction

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current automated Vlog generation methods rely on predefined scripts, lacking dynamism and personalized expression. To address this, we propose a personalized multimodal Vlog generation framework based on multi-agent collaboration and iterative self-correction. Our approach builds a heterogeneous agent system grounded in multimodal large language models (MLLMs), wherein agents collaboratively perform theme understanding, image-conditioned video generation, background music selection, and inner-monologue speech synthesis. We further introduce a feedback-driven rollback mechanism and ThemeVlogEval—a theme-oriented automatic evaluation framework—to enable dynamic content optimization and personalized alignment. Experiments demonstrate that our method significantly outperforms state-of-the-art approaches across diversity, consistency, and personal expressiveness. It offers practical advantages in efficiency, controllability, and scalability.

📝 Abstract
With the growing demand for short videos and personalized content, automated Video Log (Vlog) generation has become a key direction in multimodal content creation. Existing methods mostly rely on predefined scripts, lacking dynamism and personal expression. There is therefore an urgent need for an automated Vlog generation approach that enables effective multimodal collaboration and high personalization. To this end, we propose PersonaVlog, an automated multimodal stylized Vlog generation framework that produces personalized Vlogs featuring video, background music, and inner-monologue speech from a given theme and reference image. Specifically, we propose a multi-agent collaboration framework based on Multimodal Large Language Models (MLLMs) that efficiently generates high-quality prompts for multimodal content creation from user input, improving both the efficiency and the creativity of the process. In addition, we incorporate a feedback and rollback mechanism that leverages MLLMs to evaluate generated results and provide feedback, enabling iterative self-correction of multimodal content. We also propose ThemeVlogEval, a theme-based automated benchmarking framework that provides standardized metrics and datasets for fair evaluation. Comprehensive experiments demonstrate the significant advantages of our framework over several baselines, highlighting its effectiveness and potential for automated Vlog generation.
Problem

Research questions and friction points this paper is trying to address.

Automated vlog generation lacks personalization and dynamism
Multimodal content creation needs effective collaboration and high personalization
Existing methods rely on predefined scripts, limiting creative expression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent collaboration with MLLMs
Iterative self-correction via feedback mechanism
Theme-based automated benchmarking framework
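
The feedback-driven rollback mechanism above can be pictured as a simple generate–evaluate–regenerate loop: a generator agent drafts content, an evaluator agent (MLLM-backed in the paper) scores it against the theme, and low-scoring drafts are rolled back and regenerated with the evaluator's feedback folded into the prompt. The sketch below is purely illustrative — all names (`generate_draft`, `evaluate`, the score threshold) are assumptions standing in for the paper's actual agents, not its API:

```python
from dataclasses import dataclass


@dataclass
class Draft:
    prompt: str
    content: str


def generate_draft(prompt: str, feedback: str = "") -> Draft:
    # Stand-in for a generator agent; evaluator feedback refines the prompt.
    refined = f"{prompt} | {feedback}" if feedback else prompt
    return Draft(prompt=refined, content=f"vlog-segment({refined})")


def evaluate(draft: Draft) -> tuple[float, str]:
    # Stand-in for the MLLM evaluator: returns a score plus textual feedback.
    # (Toy heuristic: drafts whose prompt addresses consistency score high.)
    score = 1.0 if "consistent" in draft.prompt else 0.4
    return score, "add theme-consistent framing"


def generate_with_rollback(prompt: str, threshold: float = 0.8,
                           max_rounds: int = 3) -> Draft:
    draft = generate_draft(prompt)
    for _ in range(max_rounds):
        score, feedback = evaluate(draft)
        if score >= threshold:
            return draft  # accepted: passes the evaluator's bar
        # Rollback: discard the draft and regenerate with feedback attached.
        draft = generate_draft(prompt, feedback)
    return draft  # best effort after max_rounds of self-correction
```

In this toy run the first draft fails the consistency check, is rolled back, and the second draft (regenerated with the evaluator's feedback) is accepted; the real system would apply the same loop per modality (video, music, speech) with MLLM scoring in place of the heuristic.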