🤖 AI Summary
Evaluating and improving the scientific fidelity of domain-specific image generation models—particularly for information-dense, technically rigorous visualizations such as biological schematics, engineering diagrams, and scientific charts—remains an open challenge due to the lack of fine-grained, grounded evaluation benchmarks.
Method: We introduce ProImage-Bench, the first dedicated fine-grained evaluation benchmark of its kind, comprising 654 real-world reference images, 6,076 domain-specific criteria, and 44,131 hierarchical binary checks. We propose a rubric framework that automatically generates interpretable, context- and reference-aware scoring guidelines, paired with an LMM-driven automated adjudication mechanism and a principled penalty-aggregation strategy. Further, we establish an "evaluation–feedback–editing" closed-loop optimization paradigm.
Results: State-of-the-art text-to-image models achieve only 0.791 rubric accuracy and 0.553 criterion score on our benchmark. With rubric-guided iterative editing, a strong generator improves from 0.653 to 0.865 in rubric accuracy and from 0.388 to 0.697 in criterion score—demonstrating substantial gains in scientific fidelity.
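The summary names a penalty aggregation that turns hierarchical binary checks into criterion scores, but does not specify the weighting. As a hedged illustration only, the sketch below assumes a simple scheme in which each failed check subtracts a per-check penalty weight from a criterion's score (floored at zero), while rubric accuracy is the plain pass rate over all binary checks; the function names and the weighting rule are hypothetical, not the paper's actual formulas.

```python
def criterion_score(check_results, penalties):
    """Aggregate one criterion's binary check outcomes into a score in [0, 1].

    check_results: list of bools (True = check passed)
    penalties: per-check penalty weights subtracted on failure
    (Assumed scheme for illustration; the benchmark's real weights are unspecified here.)
    """
    score = 1.0
    for passed, penalty in zip(check_results, penalties):
        if not passed:
            score -= penalty
    return max(score, 0.0)


def rubric_accuracy(criteria_checks):
    """Overall rubric accuracy: fraction of binary checks passed across all criteria."""
    flat = [passed for checks in criteria_checks for passed in checks]
    return sum(flat) / len(flat)
```

Under this toy scheme, a criterion with checks `[True, False, True]` and penalties `[0.5, 0.3, 0.2]` would score 0.7, and a rubric with checks `[[True, False], [True, True]]` would have accuracy 0.75.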
📝 Abstract
We study professional image generation, where a model must synthesize information-dense, scientifically precise illustrations from technical descriptions rather than merely produce visually plausible pictures. To quantify progress, we introduce ProImage-Bench, a rubric-based benchmark that targets biology schematics, engineering/patent drawings, and general scientific diagrams. For 654 figures collected from real textbooks and technical reports, we construct detailed image instructions and a hierarchy of rubrics that decompose correctness into 6,076 criteria and 44,131 binary checks. Rubrics are derived from surrounding text and reference figures using large multimodal models, and are evaluated by an automated LMM-based judge with a principled penalty scheme that aggregates sub-question outcomes into interpretable criterion scores. We benchmark several representative text-to-image models on ProImage-Bench and find that, despite strong open-domain performance, the best base model reaches only 0.791 rubric accuracy and 0.553 criterion score overall, revealing substantial gaps in fine-grained scientific fidelity. Finally, we show that the same rubrics provide actionable supervision: feeding failed checks back into an editing model for iterative refinement boosts a strong generator from 0.653 to 0.865 in rubric accuracy and from 0.388 to 0.697 in criterion score. ProImage-Bench thus offers both a rigorous diagnostic for professional image generation and a scalable signal for improving specification-faithful scientific illustrations.