Fine-Tuning a Large Vision-Language Model for Artwork's Scoring and Critique

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost and subjectivity of human evaluation in assessing artistic creativity, as well as the limited interpretability of existing image-feature-based methods. The authors propose a multi-task fine-tuning framework leveraging Qwen2-VL-7B, which uniquely integrates a five-dimensional structured scoring rubric into system prompts. In a single forward pass, the model simultaneously generates highly accurate creativity scores—achieving a Pearson correlation coefficient exceeding 0.97 and a mean absolute error of approximately 3.95 on a 100-point scale—and produces aligned explanatory comments. Trained on a dataset of 1,000 paintings combining visual inputs, textual descriptions, and expert annotations, the model’s generated critiques achieve an SBERT similarity of 0.798 with human expert evaluations, substantially enhancing both the accuracy and interpretability of automated creativity assessment.
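The headline numbers are standard metrics that are straightforward to reproduce on a held-out set. Below is a minimal sketch of that evaluation, assuming predicted scores and generated critiques are already collected; the paper does not name its SBERT checkpoint, so the widely used all-MiniLM-L6-v2 model stands in here, and all variable names are illustrative.

```python
# Minimal sketch of the evaluation described above. Assumes predicted scores
# and generated critiques are already collected; names are illustrative.
import numpy as np
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

def score_metrics(y_true, y_pred):
    """Pearson r and MAE between expert and predicted scores (100-point scale)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    r, _ = pearsonr(y_true, y_pred)
    mae = float(np.mean(np.abs(y_true - y_pred)))
    return r, mae

def critique_similarity(generated, references, model_name="all-MiniLM-L6-v2"):
    """Average SBERT cosine similarity between generated and expert critiques.
    The checkpoint is an assumption; the paper does not specify one."""
    encoder = SentenceTransformer(model_name)
    gen_emb = encoder.encode(generated, convert_to_tensor=True)
    ref_emb = encoder.encode(references, convert_to_tensor=True)
    # One similarity per (generated, reference) pair, then the average.
    return float(cos_sim(gen_emb, ref_emb).diagonal().mean())
```

With predictions aligned one-to-one against the expert labels, `score_metrics` yields the r and MAE figures and `critique_similarity` the averaged cosine score of the kind reported above.
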

📝 Abstract
Assessing artistic creativity is foundational to creativity research and arts education, yet manual scoring (e.g., the Torrance Tests of Creative Thinking) is labor-intensive at scale. Prior machine-learning approaches show promise for visual creativity scoring, but many rely mainly on image features and provide limited or no explanatory feedback. We propose a framework for automated creativity assessment of human paintings by fine-tuning the vision-language model Qwen2-VL-7B with multi-task learning. Our dataset contains 1,000 human-created paintings, each scored on a 1-100 scale and paired with a short human-written description (content or artist explanation). Two expert raters evaluated each work using a five-dimension rubric (originality, color, texture, composition, content) and provided written critiques; we use an 80/20 train-test split. We add a lightweight regression head on the visual encoder output so the model can predict a numerical score and generate rubric-aligned feedback in a single forward pass. By embedding the structured rubric and the artwork description in the system prompt, we constrain the generated text to match the quantitative prediction. Experiments show strong accuracy: a Pearson correlation r > 0.97 and a mean absolute error (MAE) of about 3.95 on the 100-point scale. Qualitative evaluation indicates the generated feedback is semantically close to expert critiques (average SBERT cosine similarity = 0.798). The proposed approach bridges computer vision and art assessment and offers a scalable tool for creativity research and classroom feedback.
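The single-forward-pass design can be sketched roughly as follows. This is a hedged reconstruction, not the authors' released code: the system-prompt wording, pooling choice, head width, and loss weight `alpha` are all assumptions, and the attribute names (`visual`, `hidden_size`) follow the Hugging Face transformers implementation of Qwen2-VL, which may shift across library versions.

```python
# Hedged sketch of the described setup: a rubric-bearing system prompt plus a
# lightweight regression head on the visual-encoder output, trained jointly
# with the usual next-token loss. Prompt wording, pooling, head width, and
# `alpha` are assumptions, not details from the paper.
import torch
import torch.nn as nn
from transformers import Qwen2VLForConditionalGeneration

# Illustrative rubric prompt; the paper's exact wording is not given.
SYSTEM_PROMPT = (
    "You are an art educator. Score this painting from 1 to 100 and critique it "
    "on five dimensions: originality, color, texture, composition, content.\n"
    "Artwork description: {description}"
)

class CreativityScorer(nn.Module):
    def __init__(self, base_id="Qwen/Qwen2-VL-7B-Instruct", alpha=0.5):
        super().__init__()
        self.vlm = Qwen2VLForConditionalGeneration.from_pretrained(
            base_id, torch_dtype=torch.bfloat16
        )
        hidden = self.vlm.config.hidden_size
        # Lightweight regression head: mean-pooled visual tokens -> scalar score.
        self.score_head = nn.Sequential(
            nn.Linear(hidden, hidden // 4), nn.GELU(), nn.Linear(hidden // 4, 1)
        )
        self.alpha = alpha  # balances score regression against critique generation

    def forward(self, input_ids, attention_mask, pixel_values, image_grid_thw,
                labels=None, target_score=None):
        # Next-token loss over the rubric-aligned critique (teacher forcing).
        out = self.vlm(input_ids=input_ids, attention_mask=attention_mask,
                       pixel_values=pixel_values, image_grid_thw=image_grid_thw,
                       labels=labels)
        # Visual-encoder tokens for a single image, mean-pooled for the head.
        # Recomputed here for brevity; a real run would reuse the features.
        vis = self.vlm.visual(pixel_values, grid_thw=image_grid_thw)
        score = self.score_head(vis.float().mean(dim=0)).squeeze(-1)

        loss = None
        if labels is not None and target_score is not None:
            reg = nn.functional.mse_loss(score, target_score.float().squeeze())
            loss = out.loss + self.alpha * reg
        return {"loss": loss, "score": score, "lm_loss": out.loss}
```

Jointly minimizing the MSE on the score and the cross-entropy on the critique is what lets a single pass produce both outputs; at inference, the number comes from the head while the rubric-aligned text is sampled with the model's usual `generate` path.
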
Problem

Research questions and friction points this paper is trying to address.

artwork scoring
creativity assessment
automated critique
vision-language model
art education
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language model
creativity assessment
multi-task learning
automated art critique
fine-tuning
👥 Authors
Zhehan Zhang
College of Education, Clemson University
Meihua Qian
College of Education, Clemson University
Li Luo
School of Computing, Clemson University
Siyu Huang
Assistant Professor, Clemson University
computer vision, machine learning, generative models
Chaoyi Zhou
Clemson University
3D Vision
Ripon Saha
Department of Computer Engineering, Arizona State University
Xinxin Song
Department of Architecture (DiDA), University of Florence