π€ AI Summary
Existing Wikipedia-style article generation methods primarily focus on text-only outputs, overlooking the critical role of multimodal text-image synergy in enhancing informational depth and readability. To address this, we propose the first fully automated multimodal Wikipedia generation framework, integrating multi-agent collaborative retrieval, cross-modal content alignment, and joint text-image generation. We further introduce a novel multi-perspective self-reflection mechanism to improve factual accuracy and content breadth. To standardize evaluation, we construct WikiSeekβthe first benchmark explicitly designed for text-image co-referential Wikipedia generation. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art baselines on WikiSeek (+8%β29% across key metrics), producing articles with superior factuality, logical coherence, and visual richness. This work establishes a new paradigm for multimodal knowledge-content generation.
π Abstract
Knowledge discovery and collection are intelligence-intensive tasks that traditionally require significant human effort to ensure high-quality outputs. Recent research has explored multi-agent frameworks for automating Wikipedia-style article generation by retrieving and synthesizing information from the internet. However, these methods primarily focus on text-only generation, overlooking the importance of multimodal content in enhancing informativeness and engagement. In this work, we introduce WikiAutoGen, a novel system for automated multimodal Wikipedia-style article generation. Unlike prior approaches, WikiAutoGen retrieves and integrates relevant images alongside text, enriching both the depth and visual appeal of generated content. To further improve factual accuracy and comprehensiveness, we propose a multi-perspective self-reflection mechanism, which critically assesses retrieved content from diverse viewpoints to enhance reliability, breadth, and coherence, etc. Additionally, we introduce WikiSeek, a benchmark comprising Wikipedia articles with topics paired with both textual and image-based representations, designed to evaluate multimodal knowledge generation on more challenging topics. Experimental results show that WikiAutoGen outperforms previous methods by 8%-29% on our WikiSeek benchmark, producing more accurate, coherent, and visually enriched Wikipedia-style articles. We show some of our generated examples in https://wikiautogen.github.io/ .