🤖 AI Summary
This study investigates the stylistic idiosyncrasies of image captioning models and how those styles are transmitted, and subsequently attenuated, during text-to-image generation. The authors propose a neural network-based quantification framework that evaluates stylistic signatures across modalities through source-attribution tasks on both text and images. By combining ablation studies with fine-grained semantic analyses covering level of detail, color, texture, and object distribution, they provide the first quantitative assessment of how faithfully descriptive styles are preserved in generated images. The experiments reveal a stark discrepancy: text-based source classification reaches 99.70% accuracy, while image-based classification peaks at only 50%, indicating that current text-to-image models fail to retain key stylistic attributes of the original captions.
📝 Abstract
In this work, we study idiosyncrasies in caption models and their downstream impact on text-to-image models. We design a systematic analysis: given either a generated caption or the corresponding image, we train neural networks to predict the originating caption model. Our results show that text classification yields very high accuracy (99.70%), indicating that captioning models embed distinctive stylistic signatures. In contrast, these signatures largely disappear in the generated images, with classification accuracy dropping to at most 50% even for the state-of-the-art Flux model. To better understand this cross-modal discrepancy, we further analyze the data and find that the generated images fail to preserve key variations present in captions, such as differences in the level of detail, emphasis on color and texture, and the distribution of objects within a scene. Overall, our classification-based framework provides a novel methodology for quantifying both the stylistic idiosyncrasies of caption models and the prompt-following ability of text-to-image systems.
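The text side of this source-attribution setup can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual pipeline: the toy captions, the model names `model_a` and `model_b`, and the choice of character n-gram TF-IDF features feeding a small MLP are all assumptions made for the example; the paper only specifies that neural networks are trained to predict the originating caption model.

```python
# Hypothetical sketch of text-based source attribution: given a caption,
# predict which (invented) captioning model produced it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy captions imitating two styles: "model_a" is terse, "model_b" is verbose.
captions = [
    ("a dog on grass", "model_a"),
    ("a red car", "model_a"),
    ("a cat on a sofa", "model_a"),
    ("a bird in the sky", "model_a"),
    ("A fluffy brown dog lies on vivid green grass under soft light.", "model_b"),
    ("A glossy red sports car is parked along a sunlit street.", "model_b"),
    ("A striped cat curls up on a worn leather sofa.", "model_b"),
    ("A small bird glides across a pale, cloudless sky.", "model_b"),
]
texts, labels = zip(*captions)

# Character n-grams capture surface style (punctuation, casing, phrasing)
# rather than content, which is the signal a source classifier exploits.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
clf.fit(texts, labels)

# An unseen terse caption should be attributed to the terse model.
print(clf.predict(["a tree by a lake"]))
```

The paper's image-side classifier follows the same recipe with generated images as inputs; its much lower accuracy (at most 50%) is what exposes the loss of stylistic signal during text-to-image generation.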