Dual Thinking and Logical Processing -- Are Multi-modal Large Language Models Closing the Gap with Human Vision?

📅 2024-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models can approximate human visual cognition, focusing on two core visual capabilities: intuitive perception and logical reasoning. Methodologically, we construct an adversarial image dataset and integrate psychophysical experiments, error analysis of segmentation models, and cross-model behavioral comparisons across multimodal large language models (e.g., LLaVA, Qwen-VL). Notably, we introduce dual-system cognitive theory—originally from psychology—into visual AI evaluation for the first time. Results reveal that humans rely predominantly on shape cues for rapid inference; mainstream segmentation models lack compositional understanding of object substructures; and while multimodal LLMs substantially improve logical reasoning and part-based recognition, their performance remains far below human levels. The key contribution is demonstrating that current models only emulate the intuitive (System 1) processing pathway, whereas multimodal architectures partially recover the logical (System 2) pathway—establishing a novel cognitive alignment paradigm and empirical benchmark for visual AI.

📝 Abstract
The dual thinking framework distinguishes fast, intuitive processing from slower, logical processing. Studying dual thinking in vision requires images where the inferences from intuitive and logical processing differ. We introduce an adversarial dataset that provides evidence for the dual thinking framework in human vision and also aids in studying the qualitative behavior of deep learning models. The evidence underscores the importance of shape in identifying instances in human vision. Our psychophysical studies show the presence of multiple inferences in rapid succession, and analysis of errors shows that early stopping of visual processing can result in missing relevant information. Our study shows that segmentation models lack an understanding of sub-structures, as indicated by errors related to the position and number of sub-components. Additionally, the similarity between errors made by models and those of intuitive human processing indicates that models address only intuitive thinking in human vision. In contrast, multi-modal LLMs, including open-source models, demonstrate substantial progress on the errors made in intuitive processing. These models perform better on images that require logical reasoning and show recognition of sub-components. However, their gains on logical reasoning have not matched the improvements made on errors in intuitive processing.
Problem

Research questions and friction points this paper is trying to address.

Visual Perception
Large Language Models
Cognitive Abilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Understanding
Multimodal Information Processing
Cognitive Similarity
Kailas Dayanandan
Indian Institute of Technology Delhi, 110016
Anand Sinha
Indian Institute of Technology Delhi, 110016
Brejesh Lall
Professor, Elect. Engg., IIT Delhi
Signal Processing, Image Processing, Computer Vision
Nikhil Kumar
University of Waterloo
Algorithms, Discrete Mathematics