Humans are more gullible than LLMs in believing common psychological myths

📅 2025-07-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether large language models (LLMs) replicate human susceptibility to psychological misconceptions (e.g., the "left-brain/right-brain dominance" myth) and systematically evaluates their resilience to this form of misinformation. Method: The authors construct a standardized benchmark of 50 prevalent psychological misconceptions and apply cognitive-science-inspired interventions, including retrieval-augmented generation (RAG) and swaying prompts, within multi-turn prompting and controlled experiments to quantify belief bias. Contribution/Results: LLMs exhibit significantly lower initial misconception acceptance rates than human baselines; RAG further reduces misconception endorsement by 42%, revealing latent debiasing potential. The work establishes an evaluation paradigm for psychological misconceptions in LLMs, situates itself within the emerging field of machine psychology, and provides both theoretical grounding and empirical evidence for designing cognitively robust, trustworthy AI systems.

📝 Abstract
Despite widespread debunking, many psychological myths remain deeply entrenched. This paper investigates whether Large Language Models (LLMs) mimic human behaviour of myth belief and explores methods to mitigate such tendencies. Using 50 popular psychological myths, we evaluate myth belief across multiple LLMs under different prompting strategies, including retrieval-augmented generation and swaying prompts. Results show that LLMs exhibit significantly lower myth belief rates than humans, though user prompting can influence responses. RAG proves effective in reducing myth belief and reveals latent debiasing potential within LLMs. Our findings contribute to the emerging field of Machine Psychology and highlight how cognitive science methods can inform the evaluation and development of LLM-based systems.
Problem

Research questions and friction points this paper is trying to address.

Investigates whether LLMs mimic human belief in psychological myths
Evaluates methods to reduce myth belief in LLMs
Compares myth belief rates between humans and LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates myth belief across multiple LLMs
Uses retrieval-augmented generation for debiasing
Applies swaying prompts to influence responses
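The evaluation paradigm described above can be sketched as a small loop that poses each myth to a model under a chosen prompting strategy and measures the endorsement rate. This is a minimal illustration, not the paper's actual protocol: the prompt wording, strategy names, and the `ask_model` callable are all hypothetical assumptions.

```python
# Hypothetical myth-belief evaluation loop. `ask_model` stands in for any
# LLM query function (e.g. an API call); it is an assumed interface.

MYTHS = [
    "We only use 10% of our brains.",
    "People are either left-brained or right-brained.",
]

def build_prompt(myth, strategy="neutral", evidence=None):
    """Compose the question under one of three illustrative strategies."""
    if strategy == "neutral":
        return f"Is the following statement true or false? {myth}"
    if strategy == "sway":
        # Swaying prompt: nudges the model toward endorsing the myth.
        return f"Most people agree this is true: {myth} Do you agree?"
    if strategy == "rag":
        # RAG-style prompt: prepend retrieved debunking evidence.
        return (f"Context: {evidence}\n"
                f"Given the context, is this statement true or false? {myth}")
    raise ValueError(f"unknown strategy: {strategy}")

def myth_belief_rate(ask_model, myths, strategy="neutral", retrieve=None):
    """Fraction of myths the model endorses as true under a strategy."""
    endorsed = 0
    for myth in myths:
        evidence = retrieve(myth) if retrieve else None
        answer = ask_model(build_prompt(myth, strategy, evidence))
        if answer.strip().lower().startswith("true"):
            endorsed += 1
    return endorsed / len(myths)
```

Comparing `myth_belief_rate` across strategies (neutral vs. sway vs. RAG with a retriever over debunking sources) mirrors the paper's comparison of prompting conditions, with human survey baselines as the external reference point.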