🤖 AI Summary
Existing approaches to automatically generating multimodal presentations from academic papers suffer from fragmented pipelines, semantic inconsistencies, and low efficiency. This work proposes PaperX, a unified framework that introduces the Scholar DAG—a novel structured intermediate representation—to decouple a paper’s logical structure from its presentation syntax. By leveraging an adaptive graph traversal strategy, PaperX enables the generation of diverse, high-quality presentation content from a single source while preserving semantic fidelity. The framework integrates structural parsing and rendering into a cohesive pipeline, significantly enhancing both semantic consistency and computational efficiency. Empirical results demonstrate that PaperX achieves state-of-the-art performance in content accuracy and visual aesthetics, outperforming specialized single-task models while reducing computational overhead.
📝 Abstract
Transforming scientific papers into multimodal presentation content is essential for research dissemination but remains labor-intensive. Existing automated solutions typically treat each format as an isolated downstream task, leading to redundant processing and semantic inconsistency. We introduce PaperX, a unified framework that models academic presentation generation as a structural transformation and rendering process. Central to our approach is the Scholar DAG, an intermediate representation that decouples the paper's logical structure from its final presentation syntax. By applying adaptive graph traversal strategies, PaperX generates diverse, high-quality outputs from a single source. Comprehensive evaluations demonstrate that our framework achieves state-of-the-art performance in content fidelity and aesthetic quality while significantly improving cost efficiency compared to specialized single-task agents.
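The core idea of an intermediate representation like the Scholar DAG can be illustrated with a toy sketch. The abstract does not specify the actual node schema or traversal policy, so everything below (the `Node` fields, the renderer functions, the depth-first walk) is a hypothetical illustration of the general pattern: one structured graph, multiple format-specific renderers.

```python
from dataclasses import dataclass, field

# Hypothetical node schema -- not PaperX's actual representation.
@dataclass
class Node:
    id: str
    kind: str                      # e.g. "section", "figure", "claim"
    text: str
    children: list = field(default_factory=list)

def traverse(node, render, depth=0):
    """Depth-first walk that applies a format-specific renderer to each node.

    The paper's logical structure (the graph) stays fixed; only the
    `render` callback changes per target format.
    """
    lines = [render(node, depth)]
    for child in node.children:
        lines.extend(traverse(child, render, depth + 1))
    return lines

# Two renderers over the same graph yield two presentation formats.
def as_slide_bullets(node, depth):
    return "  " * depth + "- " + node.text

def as_outline(node, depth):
    return "#" * (depth + 1) + " " + node.text

root = Node("root", "section", "PaperX Overview", [
    Node("method", "section", "Method", [
        Node("dag", "claim", "Scholar DAG decouples structure from syntax"),
    ]),
])

slides = traverse(root, as_slide_bullets)
outline = traverse(root, as_outline)
```

Here `slides` and `outline` are produced from the same source graph, which is the sense in which an intermediate representation avoids redundant per-format processing.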