🤖 AI Summary
While current multimodal large language models (MLLMs) excel at visual recognition, they struggle to model subjective human perceptual attributes of images such as memorability, engagingness, aesthetic appeal, and emotional resonance.
Method: We introduce CogIP-Bench, the first systematic benchmark for evaluating image-level cognitive properties, enabling quantifiable, transferable, and application-oriented cognitive alignment. We further propose a post-training paradigm that leverages large-scale human perceptual annotations to align MLLMs with human perception, and we integrate this capability into diffusion-based image generation for perception-guided synthesis.
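To make the benchmark's evaluation loop concrete, here is a minimal sketch: prompt an MLLM for a scalar rating of one cognitive property and correlate its ratings with human annotations. The `query_mllm` helper, the prompt wording, and the 1-10 scale are illustrative assumptions, not CogIP-Bench's actual protocol.

```python
# Hedged sketch of a CogIP-Bench-style evaluation loop.
from scipy.stats import spearmanr

def query_mllm(image_path: str, prompt: str) -> float:
    """Placeholder: call an MLLM of your choice and parse a numeric rating."""
    raise NotImplementedError("wire up an actual MLLM client here")

def evaluate_alignment(samples: list[tuple[str, float]], prop: str) -> float:
    """samples: (image_path, human_score) pairs for one cognitive property.
    Returns the Spearman rank correlation between model and human ratings."""
    prompt = (f"On a scale of 1 to 10, how {prop} is this image? "
              "Answer with a single number.")
    preds = [query_mllm(path, prompt) for path, _ in samples]
    human = [score for _, score in samples]
    rho, _ = spearmanr(preds, human)
    return rho

# e.g. evaluate_alignment(memorability_samples, "memorable")
```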
Results: Experiments demonstrate significant improvements in predicting subjective attributes, including memorability, aesthetic quality, and emotion elicitation, as well as successful transfer of this alignment to generative tasks, yielding images with enhanced visual appeal and stronger emotional resonance. This work advances MLLMs from “seeing” to “understanding” human perception, fostering human-centered AI development.
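The summary does not spell out how the aligned model steers generation. One simple, hedged reading is best-of-N reranking, sketched below with the Hugging Face `diffusers` library; `rate_memorability` is a hypothetical wrapper around the cognitively aligned MLLM, and the paper's actual integration may be tighter (e.g., guidance during sampling).

```python
# Hedged sketch: perception-guided synthesis via best-of-N reranking.
import torch
from diffusers import StableDiffusionPipeline

def rate_memorability(image) -> float:
    """Placeholder: score a PIL image with the cognitively aligned MLLM."""
    raise NotImplementedError("call the aligned MLLM here")

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Sample several candidates, then keep the one the aligned scorer rates highest.
candidates = pipe("a quiet street after rain", num_images_per_prompt=8).images
best = max(candidates, key=rate_memorability)
best.save("best_candidate.png")
```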
📝 Abstract
While Multimodal Large Language Models (MLLMs) are adept at answering *what* is in an image (identifying objects and describing scenes), they often lack the ability to understand *how* an image feels to a human observer. This gap is most evident for subjective cognitive properties, such as what makes an image memorable, funny, aesthetically pleasing, or emotionally evocative. To address this challenge systematically, we introduce CogIP-Bench, a comprehensive benchmark for evaluating MLLMs on such image cognitive properties. Our evaluation reveals a significant gap: current models are poorly aligned with human perception of these nuanced properties. We then demonstrate that a post-training phase can effectively bridge this gap, significantly enhancing the model's alignment with human judgments. Furthermore, we show that this learned cognitive alignment is not merely predictive but also transferable to downstream creative tasks. By integrating our cognitively aligned MLLM into an image generation pipeline, we can guide the synthesis process toward images that better embody desired traits, such as being more memorable or visually appealing. Our work provides a benchmark to measure this human-like perception, a post-training pipeline to enhance it, and a demonstration that this alignment unlocks more human-centric AI.
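The abstract leaves the post-training objective unspecified. A common recipe for this kind of alignment, assumed here purely for illustration, is supervised fine-tuning on (image, property question, verbalized human rating) triples; the `model(images=..., input_ids=...)` interface below stands for a generic autoregressive MLLM and is hypothetical, not the paper's implementation.

```python
# Hedged sketch of one supervised fine-tuning step for cognitive alignment.
import torch.nn.functional as F

def sft_step(model, batch, optimizer) -> float:
    """The model learns to emit the human rating as text. Prompt tokens are
    masked with -100 in batch["labels"] so only rating tokens are supervised."""
    logits = model(images=batch["images"], input_ids=batch["input_ids"]).logits
    # Standard next-token objective: predict token t+1 from tokens up to t.
    loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        batch["labels"][:, 1:].reshape(-1),
        ignore_index=-100,
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```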