What if Deception Cannot be Detected? A Cross-Linguistic Study on the Limits of Deception Detection from Text

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the generalizability of text-based deception detection, arguing that prior positive results may stem largely from artifacts introduced during data collection. The authors propose a belief-misalignment definition of deception (a mismatch between an author's claims and their true beliefs, irrespective of factual accuracy) and introduce DeFaBel, a controlled, multi-condition deception resource comprising three corpora: a German corpus of deceptive and non-deceptive arguments and a multilingual German-English version, collected under varying conditions to account for belief change. Methodologically, they measure the statistical association between commonly reported linguistic cues and deception labels, and benchmark feature-based models, pretrained language models, and instruction-tuned large language models (LLMs) across experimental settings. Results show: (1) no significant correlation between traditional linguistic cues and deception labels on any DeFaBel variant; (2) models that perform well on established deception datasets perform near chance on DeFaBel; and (3) on comparable English datasets, cue effect sizes remain low and the set of predictive cues is inconsistent across datasets. The findings expose the fragility of current text deception detection paradigms and motivate belief-driven modeling and controlled corpus construction as new directions.
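To make the cue-association analysis concrete, here is a minimal sketch in Python (not the authors' code; cue names and data are purely illustrative) of correlating per-text linguistic cue values with binary deception labels via point-biserial correlation, the kind of effect-size statistic such studies report:

    # Hypothetical sketch of a cue-association analysis: correlate each
    # linguistic cue with 0/1 deception labels and report effect size
    # (point-biserial r) plus significance. Cue names are illustrative.
    import numpy as np
    from scipy.stats import pointbiserialr

    def cue_label_associations(cue_matrix, labels, cue_names):
        """Return (cue, r, p) for every cue column against binary labels,
        sorted by absolute effect size."""
        results = []
        for j, name in enumerate(cue_names):
            r, p = pointbiserialr(labels, cue_matrix[:, j])
            results.append((name, r, p))
        return sorted(results, key=lambda t: abs(t[1]), reverse=True)

    # Example with random data: with no real signal, r stays near 0 and
    # p-values are non-significant, mirroring the DeFaBel findings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))     # e.g. pronoun rate, length, negations
    y = rng.integers(0, 2, size=500)  # 1 = deceptive, 0 = sincere
    for name, r, p in cue_label_associations(X, y, ["pronouns", "length", "negations"]):
        print(f"{name:10s} r={r:+.3f} p={p:.3f}")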

📝 Abstract
Can deception be detected solely from written text? Cues of deceptive communication are inherently subtle, even more so in text-only communication. Yet, prior studies have reported considerable success in automatic deception detection. We hypothesize that such findings are largely driven by artifacts introduced during data collection and do not generalize beyond specific datasets. We revisit this assumption by introducing a belief-based deception framework, which defines deception as a misalignment between an author's claims and true beliefs, irrespective of factual accuracy, allowing deception cues to be studied in isolation. Based on this framework, we construct three corpora, collectively referred to as DeFaBel, including a German-language corpus of deceptive and non-deceptive arguments and a multilingual version in German and English, each collected under varying conditions to account for belief change and enable cross-linguistic analysis. Using these corpora, we evaluate commonly reported linguistic cues of deception. Across all three DeFaBel variants, these cues show negligible, statistically insignificant correlations with deception labels, contrary to prior work that treats such cues as reliable indicators. We further benchmark against other English deception datasets following similar data collection protocols. While some show statistically significant correlations, effect sizes remain low and, critically, the set of predictive cues is inconsistent across datasets. We also evaluate deception detection using feature-based models, pretrained language models, and instruction-tuned large language models. While some models perform well on established deception datasets, they consistently perform near chance on DeFaBel. Our findings challenge the assumption that deception can be reliably inferred from linguistic cues and call for rethinking how deception is studied and modeled in NLP.
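As a rough illustration of the near-chance benchmarking result, the following sketch (assuming scikit-learn and SciPy; not the paper's pipeline) cross-validates a simple feature-based classifier and tests whether its accuracy exceeds the 50% chance level:

    # Minimal sketch of a chance-level check: cross-validate a
    # feature-based classifier, then run a one-sided binomial test of
    # whether its accuracy exceeds 0.5. Data here is synthetic noise.
    import numpy as np
    from scipy.stats import binomtest
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    def near_chance_check(X, y):
        preds = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)
        correct = int((preds == y).sum())
        test = binomtest(correct, n=len(y), p=0.5, alternative="greater")
        print(f"accuracy={correct / len(y):.3f}  p(acc > chance)={test.pvalue:.3f}")

    # On cue-free data, accuracy stays near 0.5 and the test is
    # non-significant -- the pattern the paper reports for DeFaBel.
    rng = np.random.default_rng(1)
    near_chance_check(rng.normal(size=(400, 10)), rng.integers(0, 2, size=400))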
Problem

Research questions and friction points this paper is trying to address.

Can deception be detected from text alone?
Do linguistic cues reliably indicate deception across datasets?
Is current deception detection generalizable or dataset-specific?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Belief-based deception framework isolates deception cues from factual accuracy (labeling rule sketched after this list)
DeFaBel corpora enable controlled cross-linguistic analysis
Benchmarking shows state-of-the-art models perform near chance on DeFaBel
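
The belief-based labeling rule can be stated in a few lines; the sketch below uses hypothetical field names, not DeFaBel's actual schema:

    # Illustrative encoding of the belief-based definition: a text is
    # deceptive iff the stance it argues for misaligns with the author's
    # annotated belief, regardless of whether the claim is factually true.
    # Field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ArgumentRecord:
        argued_stance: bool    # stance the text argues for
        author_belief: bool    # author's annotated true belief
        factually_true: bool   # irrelevant to the deception label

    def is_deceptive(record: ArgumentRecord) -> bool:
        return record.argued_stance != record.author_belief

    # A factually false but sincerely believed claim is NOT deceptive here.
    print(is_deceptive(ArgumentRecord(True, True, False)))   # False
    print(is_deceptive(ArgumentRecord(True, False, True)))   # True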