Beyond End-to-End Video Models: An LLM-Based Multi-Agent System for Educational Video Generation

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing end-to-end video generation models in educational contexts, where logical inconsistency, visual distortion, and audio-visual desynchronization frequently undermine output quality. To overcome these challenges, we propose LAVES, a hierarchical multi-agent system grounded in large language models that decomposes educational video generation into subtasks such as problem solving, visualization code generation, and instructional script writing. These subtasks are orchestrated by a central coordinator and refined through iterative critique and quality-verification mechanisms. By leveraging a structured, executable video-scripting paradigm, LAVES enables high-fidelity, fully automated, low-cost video production. In large-scale deployment, LAVES generates over one million videos per day with a cost reduction exceeding 95% and significantly higher user acceptance rates than existing approaches, thereby overcoming longstanding bottlenecks in logical coherence and controllability.

📝 Abstract
Although recent end-to-end video generation models demonstrate impressive performance in visually oriented content creation, they remain limited in scenarios that require strict logical rigor and precise knowledge representation, such as instructional and educational media. To address this problem, we propose LAVES, a hierarchical LLM-based multi-agent system for generating high-quality instructional videos from educational problems. LAVES formulates educational video generation as a multi-objective task that simultaneously demands correct step-by-step reasoning, pedagogically coherent narration, semantically faithful visual demonstrations, and precise audio-visual alignment. To overcome the limitations of prior approaches (including low procedural fidelity, high production cost, and limited controllability), LAVES decomposes the generation workflow into specialized agents coordinated by a central Orchestrating Agent with explicit quality gates and iterative critique mechanisms. Specifically, the Orchestrating Agent supervises a Solution Agent for rigorous problem solving, an Illustration Agent that produces executable visualization code, and a Narration Agent for learner-oriented instructional scripts. In addition, all outputs from the working agents are subject to semantic critique, rule-based constraints, and tool-based compilation checks. Rather than directly synthesizing pixels, the system constructs a structured, executable video script that is deterministically compiled into synchronized visuals and narration using template-driven assembly rules, enabling fully automated end-to-end production without manual editing. In large-scale deployments, LAVES achieves a throughput exceeding one million videos per day, delivering over a 95% reduction in cost compared with current industry-standard approaches while maintaining a high acceptance rate.
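The orchestration pattern the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration of the control flow (an orchestrator delegating to specialist agents, gating each output with a critique loop, and emitting a structured script rather than pixels); all names and interfaces here are invented for illustration and are not the paper's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class VideoScript:
    """Structured, executable video script (stand-in for the paper's format)."""
    solution_steps: list = field(default_factory=list)
    visualization_code: str = ""
    narration: str = ""

def solution_agent(problem: str) -> list:
    # Stand-in for LLM-based rigorous, step-by-step problem solving.
    return [f"Restate the problem: {problem}", "Derive the answer step by step"]

def illustration_agent(steps: list) -> str:
    # Stand-in for generating executable visualization code per solution step.
    return "\n".join(f"draw_step({i})" for i in range(len(steps)))

def narration_agent(steps: list) -> str:
    # Stand-in for writing a learner-oriented instructional script.
    return " ".join(steps)

def quality_gate(output) -> bool:
    # Stand-in for semantic critique, rule-based constraints,
    # and tool-based compilation checks on an agent's output.
    return bool(output)

def orchestrate(problem: str, max_retries: int = 2) -> VideoScript:
    """Central coordinator: run each agent, re-invoke it until the
    quality gate passes (iterative critique), then attach its output."""
    script = VideoScript()
    stages = [
        (lambda: solution_agent(problem), "solution_steps"),
        (lambda: illustration_agent(script.solution_steps), "visualization_code"),
        (lambda: narration_agent(script.solution_steps), "narration"),
    ]
    for produce, attribute in stages:
        for _ in range(max_retries + 1):
            output = produce()
            if quality_gate(output):
                setattr(script, attribute, output)
                break
    return script
```

The resulting `VideoScript` would then be handed to a deterministic, template-driven compiler that renders synchronized visuals and narration; that compilation stage is outside the scope of this sketch.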
Problem

Research questions and friction points this paper is trying to address.

educational video generation
logical rigor
knowledge representation
audio-visual alignment
instructional media
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based multi-agent system
executable video script
structured educational video generation
orchestrated agent coordination
template-driven assembly