🤖 AI Summary
This study investigates whether large language models (LLMs) replicate humans’ susceptibility to psychological misconceptions (e.g., the “left-brain/right-brain dominance” myth) and systematically evaluates their resilience to misinformation. Method: We construct a standardized benchmark of 50 prevalent psychological misconceptions and apply cognition-inspired interventions, including retrieval-augmented generation (RAG) and swaying prompts, within multi-turn prompting and controlled experiments to quantify belief bias. Contribution/Results: LLMs exhibit significantly lower initial misconception acceptance rates than human baselines, and RAG further reduces misconception endorsement by 42%, demonstrating intrinsic debiasing potential. This work establishes an evaluation paradigm for psychological misconceptions in LLMs and situates the findings within the emerging field of machine psychology, providing both theoretical grounding and empirical evidence for designing cognitively robust, trustworthy AI systems.
📝 Abstract
Despite widespread debunking, many psychological myths remain deeply entrenched. This paper investigates whether Large Language Models (LLMs) mimic the human tendency to believe such myths and explores methods to mitigate this behaviour. Using 50 popular psychological myths, we evaluate myth belief across multiple LLMs under different prompting strategies, including retrieval-augmented generation (RAG) and swaying prompts. Results show that LLMs exhibit significantly lower myth belief rates than humans, though user prompting can influence responses. RAG proves effective in reducing myth belief and reveals latent debiasing potential within LLMs. Our findings contribute to the emerging field of Machine Psychology and highlight how cognitive science methods can inform the evaluation and development of LLM-based systems.
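The evaluation described above lends itself to a simple measurement loop. The Python sketch below is illustrative only: `query_llm`, `retrieve_evidence`, the sample myth statements, and the agreement check are hypothetical stand-ins, not the authors' benchmark or code.

```python
# Minimal sketch of a myth-belief evaluation loop (illustrative only).
# `query_llm` stands in for whichever chat-completion API is under test;
# the myth statements and the retrieval step are placeholders, not the
# paper's actual benchmark items or pipeline.

from typing import Callable, List, Optional

MYTHS: List[str] = [
    "People only use 10% of their brains.",
    "Some people are left-brained and others are right-brained.",
    # ... a full benchmark would contain all 50 statements.
]


def endorses_myth(answer: str) -> bool:
    """Crude agreement check; a real study would use a stricter rubric."""
    return answer.strip().lower().startswith("true")


def belief_rate(
    query_llm: Callable[[str], str],
    retrieve_evidence: Optional[Callable[[str], str]] = None,
) -> float:
    """Fraction of myths the model endorses, optionally with RAG context."""
    endorsed = 0
    for myth in MYTHS:
        prompt = (
            f'Is the following statement true or false? "{myth}" '
            'Answer "true" or "false", then briefly explain.'
        )
        if retrieve_evidence is not None:
            # RAG condition: prepend retrieved debunking evidence to the query.
            prompt = f"Background evidence:\n{retrieve_evidence(myth)}\n\n{prompt}"
        if endorses_myth(query_llm(prompt)):
            endorsed += 1
    return endorsed / len(MYTHS)
```

Under these assumptions, `belief_rate(query_llm)` would give a baseline endorsement rate and `belief_rate(query_llm, retrieve_evidence)` the RAG condition, so the two can be compared directly.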