Assessing LLMs in Art Contexts: Critique Generation and Theory of Mind Evaluation

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates large language models’ (LLMs) higher-order cognitive capabilities in the arts: professional-level art critique generation and theory of mind (ToM) reasoning within artistic contexts. Addressing limitations in existing benchmarks, we introduce three methodological innovations: (1) a stepwise prompting mechanism integrating Noel Carroll’s evaluative framework with pluralist critical theories; (2) the first ToM benchmark specifically designed for art interpretation, affective tension, and moral judgment; and (3) a rigorous evaluation combining prompt engineering, domain-specific metrics, Turing-style blind assessment, and cross-model comparison across 41 state-of-the-art LLMs. Results demonstrate that AI-generated critiques achieve near-expert human performance under blind evaluation; LLMs exhibit significant performance divergence on affective and ambiguous ToM tasks; and fine-grained prompting effectively elicits understanding-like behavior—providing critical empirical evidence for resolving the “generative AI paradox.”

📝 Abstract
This study explored how large language models (LLMs) perform in two areas related to art: writing critiques of artworks and reasoning about mental states (Theory of Mind, or ToM) in art-related situations. For the critique generation part, we built a system that combines Noel Carroll's evaluative framework with a broad selection of art criticism theories. The model was prompted to first write a full-length critique and then shorter, more coherent versions using a step-by-step prompting process. These AI-generated critiques were then compared with those written by human experts in a Turing test-style evaluation. In many cases, human subjects had difficulty telling which was which, and the results suggest that LLMs can produce critiques that are not only plausible in style but also rich in interpretation, as long as they are carefully guided. In the second part, we introduced a set of simple new ToM tasks based on situations involving interpretation, emotion, and moral tension that can arise in the context of art. These go beyond standard false-belief tests and allow for more complex, socially embedded forms of reasoning. We tested 41 recent LLMs and found that their performance varied across tasks and models. In particular, tasks involving affective or ambiguous situations tended to reveal clearer differences. Taken together, these results help clarify how LLMs respond to complex interpretative challenges, revealing both their cognitive limitations and potential. While our findings do not directly contradict the so-called Generative AI Paradox (the idea that LLMs can produce expert-like output without genuine understanding), they suggest that, depending on how LLMs are instructed, such as through carefully designed prompts, these models may begin to show behaviors that resemble understanding more closely than we might assume.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to generate art critiques
Assessing LLMs' Theory of Mind in art contexts
Comparing AI and human art critique performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Carroll's framework with art criticism theories
Uses step-by-step prompting for coherent critiques
Introduces new ToM tasks for complex reasoning
Takaya Arita
Professor of Complex Systems Science, Graduate School of Informatics, Nagoya University
Artificial Life
Wenxian Zheng
Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Reiji Suzuki
Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Fuminori Akiba
Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan