Has published several papers on LLM performance in the Turing test, deception and persuasion, and theory-of-mind tasks, including 'Large Language Models Pass the Turing Test' (2025) and 'Lies, Damned Lies, and Distributional Language Statistics: Persuasion and Deception with Large Language Models' (2024).
Research Experience
Assistant Professor, Dept of Psychology, Stony Brook University
Background
Interested in the intersection of psychology and AI. Recent work has focused on the potential of LLMs to persuade or manipulate people, and on evaluating LLMs on the False Belief task, other theory-of-mind tasks, and the Turing test.
Miscellany
Has engaged with the media on topics such as whether AI has passed the Turing test and questions of understanding, grounding, and reference in LLMs. Led several projects, including studies of whether LLMs can pass the Turing test and whether humans can outperform GPT-2 at language modeling.