ChArtist: Generating Pictorial Charts with Unified Spatial and Subject Control

📅 2026-03-14
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing methods struggle to generate high-quality pictorial charts that simultaneously preserve data fidelity and flexibly integrate visual style with chart structure. This work proposes ChArtist, a Diffusion Transformer–based model for pictorial chart generation that leverages a skeletal representation for spatial control, incorporates reference images for thematic guidance, and introduces adaptive positional encoding alongside a spatially gated attention mechanism to harmonize data accuracy with visual aesthetics. The authors also construct the first large-scale dataset of pictorial chart triplets and propose a unified metric for evaluating data accuracy. Experimental results demonstrate that ChArtist effectively retains the visual characteristics of reference images while maintaining strict adherence to underlying data, thereby validating the efficacy of task-specific representations in data-driven visual storytelling.

๐Ÿ“ Abstract
A pictorial chart is an effective medium for visual storytelling, seamlessly integrating visual elements with data charts. However, creating such images is challenging because the flexibility of visual elements often conflicts with the rigidity of chart structures. This process thus requires a creative deformation that maintains both data faithfulness and visual aesthetics. Current methods that extract dense structural cues from natural images (e.g., edge or depth maps) are ill-suited as conditioning signals for pictorial chart generation. We present ChArtist, a domain-specific diffusion model for generating pictorial charts automatically, offering two distinct types of control: 1) spatial control that aligns well with the chart structure, and 2) subject-driven control that respects the visual characteristics of a reference image. To achieve this, we introduce a skeleton-based spatial control representation. This representation encodes only the data-encoding information of the chart, allowing for the easy incorporation of reference visuals without a rigid outline constraint. We implement our method based on the Diffusion Transformer (DiT) and leverage an adaptive position encoding mechanism to manage these two controls. We further introduce Spatially Gated Attention to modulate the interaction between spatial control and subject control. To support the fine-tuning of pre-trained models for this task, we created a large-scale dataset of 30,000 triplets (skeleton, reference image, pictorial chart). We also propose a unified data accuracy metric to evaluate the data faithfulness of the generated charts. We believe this work demonstrates that current generative models can achieve data-driven visual storytelling by moving beyond general-purpose conditions to task-specific representations. Project page: https://chartist-ai.github.io/.
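The abstract describes Spatially Gated Attention as a mechanism that modulates the interaction between the skeleton-based spatial control and the reference-image subject control, but gives no formula. The sketch below is only an illustrative guess at one plausible form: each image token attends separately to skeleton tokens and to reference tokens, and a per-token spatial gate in [0, 1] (presumably derived from the skeleton) blends the two branches. All names (`spatially_gated_attention`, `gate`) and the additive blending rule are assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    """Plain scaled dot-product cross-attention.
    q: (n, d) queries; k, v: (m, d) keys/values."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def spatially_gated_attention(tokens, skel_tokens, ref_tokens, gate):
    """Hypothetical blend of spatial and subject control.

    tokens:      (n, d) image tokens being denoised
    skel_tokens: (s, d) skeleton (spatial-control) tokens
    ref_tokens:  (r, d) reference-image (subject-control) tokens
    gate:        (n,)   per-token weight in [0, 1]; 1 favors the
                        spatial branch, 0 the subject branch
    """
    a_spatial = cross_attention(tokens, skel_tokens, skel_tokens)
    a_subject = cross_attention(tokens, ref_tokens, ref_tokens)
    g = gate[:, None]                       # broadcast over channels
    return tokens + g * a_spatial + (1.0 - g) * a_subject
```

Under this reading, tokens near data-encoding regions of the chart (gate near 1) are dominated by the skeleton condition, while background tokens (gate near 0) are free to take on the reference image's style.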
Problem

Research questions and friction points this paper is trying to address.

pictorial charts
data faithfulness
visual aesthetics
spatial control
subject control
Innovation

Methods, ideas, or system contributions that make the work stand out.

pictorial chart generation
skeleton-based spatial control
subject-driven control
diffusion transformer
spatially gated attention