Generating Storytelling Images with Rich Chains-of-Reasoning

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of generating storytelling images: visual depictions that encode logically coherent Chains-of-Reasoning (CoRs) through visual cues, a setting in which existing methods suffer from insufficient semantic depth and narrative coherence. To this end, the authors propose StorytellingPainter, a two-stage framework that (1) leverages large language models (LLMs) to generate structured story text rich in causal and temporal relations, and (2) conditions text-to-image (T2I) models on this text to synthesize high-fidelity visuals aligned with the reasoning chain. The paper formally defines the task, introduces the first comprehensive evaluation framework covering semantic complexity, story-image alignment, and generation diversity, and releases Mini-Storytellers, a series of lightweight yet effective story-generation models. Experiments demonstrate substantial improvements in narrative expressiveness and logical consistency, significantly narrowing the performance gap between open-source and proprietary LLMs across multiple metrics.

📝 Abstract
An image can convey a compelling story by presenting rich, logically connected visual clues. These connections form Chains-of-Reasoning (CoRs) within the image, enabling viewers to infer events, causal relationships, and other information, thereby understanding the underlying story. In this paper, we focus on these semantically rich images and define them as Storytelling Images. Such images have diverse applications beyond illustration creation and cognitive screening, leveraging their ability to convey multi-layered information visually and inspire active interpretation. However, due to their complex semantic nature, Storytelling Images are inherently challenging to create, and thus remain relatively scarce. To address this challenge, we introduce the Storytelling Image Generation task, which explores how generative AI models can be leveraged to create such images. Specifically, we propose a two-stage pipeline, StorytellingPainter, which combines the creative reasoning abilities of Large Language Models (LLMs) with the visual synthesis capabilities of Text-to-Image (T2I) models to generate Storytelling Images. Alongside this pipeline, we develop a dedicated evaluation framework comprising three main evaluators: a Semantic Complexity Evaluator, a KNN-based Diversity Evaluator and a Story-Image Alignment Evaluator. Given the critical role of story generation in the Storytelling Image Generation task and the performance disparity between open-source and proprietary LLMs, we further explore tailored training strategies to reduce this gap, resulting in a series of lightweight yet effective models named Mini-Storytellers. Experimental results demonstrate the feasibility and effectiveness of our approaches. The code is available at https://github.com/xiujiesong/StorytellingImageGeneration.
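The abstract names a KNN-based Diversity Evaluator but does not spell out its computation. A common way to realize such a metric, sketched below as an assumption rather than the paper's exact method, is to embed each generated image and score the set by the mean distance from every embedding to its k nearest neighbors; identical outputs score zero, while varied outputs score higher.

```python
import numpy as np

def knn_diversity(embeddings: np.ndarray, k: int = 5) -> float:
    """Mean distance from each sample to its k nearest neighbors.

    Higher values indicate a more spread-out (diverse) set of
    generated images in embedding space. This is a generic sketch,
    not the paper's exact evaluator.
    """
    # Pairwise Euclidean distances via broadcasting: (n, n) matrix.
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude self-distance
    # For each point, average its k smallest distances to others.
    knn = np.sort(dists, axis=1)[:, :k]
    return float(knn.mean())

# Identical embeddings score 0; spread-out embeddings score higher.
tight = np.zeros((10, 4))
spread = np.random.default_rng(0).normal(size=(10, 4))
print(knn_diversity(tight), knn_diversity(spread))
```

For large sample sets, the quadratic distance matrix would be replaced by an approximate nearest-neighbor index, but the scoring logic stays the same.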
Problem

Research questions and friction points this paper is trying to address.

Generating storytelling images with rich logical connections
Addressing scarcity of complex semantic storytelling images
Combining LLMs and T2I models for image creation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage pipeline combining LLMs and T2I models
Dedicated evaluation framework with three specialized evaluators
Tailored training strategies for lightweight story generation models
Xiujie Song
Shanghai Jiao Tong University

Qi Jia
Shanghai Artificial Intelligence Laboratory, China

Shota Watanabe
X-LANCE Lab, School of Computer Science, Shanghai Jiao Tong University, China

Xiaoyi Pang
X-LANCE Lab, School of Computer Science, Shanghai Jiao Tong University, China

Ruijie Chen
East China University of Technology, China

Mengyue Wu
Shanghai Jiao Tong University
Speech perception and production · affective computing · audio cognition

Kenny Q. Zhu
University of Texas at Arlington
Natural Language Processing · Artificial Intelligence · Knowledge Engineering · Animal Communication