The Imitation Game: Using Large Language Models as Chatbots to Combat Chat-Based Cybercrimes

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional static rule-based and shallow content-filtering approaches fail to detect scams driven by trust-building and psychological manipulation on real-time chat platforms. Method: This paper introduces the first proactive, interactive decoy-agent framework based on large language models (LLMs), transforming LLMs from passive classifiers into simulated victims that engage perpetrators in adversarial dialogue. It integrates OCR-enhanced multimodal parsing of payment information with a dynamic, dialogue-based deception-detection mechanism. Contribution/Results: Deployed via the Telegram API, the system conducted real-world adversarial interactions with 53 perpetrators across 98 illicit video-scam groups. Over 56% of its dialogues went undetected as decoys, uncovering payment pathways, upselling tactics, and cross-platform migration patterns. The framework significantly enhances proactive detection and forensic traceability of covert social-engineering attacks.
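The decoy-agent idea above can be illustrated with a minimal sketch: a simulated victim that keeps the dialogue going and records the moment the conversation turns to monetization. All names here (`DecoyAgent`, `llm_reply`, `PAYMENT_CUES`) are hypothetical illustrations, not the paper's actual implementation.

```python
# Minimal sketch of a multi-round decoy dialogue loop, assuming a
# hypothetical llm_reply() wrapper around a victim-persona LLM.
# Keyword cues are illustrative; a real system would use the LLM itself
# to classify the scammer's intent.

PAYMENT_CUES = ("pay", "wallet", "transfer", "usdt", "price")

def llm_reply(history):
    """Stand-in for an LLM call that answers in a naive-victim persona."""
    return "Oh, I see. What do I need to do next?"

class DecoyAgent:
    def __init__(self):
        self.history = []          # full adversarial dialogue transcript
        self.payment_requested = False

    def step(self, scammer_msg):
        """Consume one scammer message and return the decoy's reply."""
        self.history.append(("scammer", scammer_msg))
        low = scammer_msg.lower()
        if any(cue in low for cue in PAYMENT_CUES):
            # The perpetrator has moved to monetization: record it and
            # probe for concrete payment channels instead of deflecting.
            self.payment_requested = True
            reply = "Sure, how exactly do I pay you?"
        else:
            reply = llm_reply(self.history)
        self.history.append(("decoy", reply))
        return reply
```

Keeping the transcript in `history` is what makes the agent forensic: once `payment_requested` flips, the recorded turns document the upselling and payment-pathway behavior the paper reports.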

📝 Abstract
Chat-based cybercrime has emerged as a pervasive threat, with attackers leveraging real-time messaging platforms to conduct scams that rely on trust-building, deception, and psychological manipulation. Traditional defense mechanisms, which operate on static rules or shallow content filters, struggle to identify these conversational threats, especially when attackers use multimedia obfuscation and context-aware dialogue. In this work, we ask a provocative question inspired by the classic Imitation Game: Can machines convincingly pose as human victims to turn deception against cybercriminals? We present LURE (LLM-based User Response Engagement), the first system to deploy Large Language Models (LLMs) as active agents, not as passive classifiers, embedded within adversarial chat environments. LURE combines automated discovery, adversarial interaction, and OCR-based analysis of image-embedded payment data. Applied to the setting of illicit video chat scams on Telegram, our system engaged 53 actors across 98 groups. In over 56 percent of interactions, the LLM maintained multi-round conversations without being noticed as a bot, effectively "winning" the imitation game. Our findings reveal key behavioral patterns in scam operations, such as payment flows, upselling strategies, and platform migration tactics.
Problem

Research questions and friction points this paper addresses.

Detect conversational cybercrime scams using LLMs
Engage attackers in chat to expose their tactics
Analyze scam behavior patterns through automated interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs as active agents in adversarial chat environments
Automated discovery with OCR-based image payment analysis
Multi-round deception to reveal scam behavioral patterns
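The OCR-based payment-analysis point can be sketched as a small post-processing step: once screenshot text has been extracted (e.g., by an OCR engine such as Tesseract), regexes pull out payment identifiers, and the decoy responds accordingly. The patterns and function names below are illustrative assumptions, not the paper's pipeline.

```python
import re

# Hypothetical patterns for payment identifiers a scammer might embed in
# a screenshot; a real deployment would tune these per payment platform.
PAYMENT_PATTERNS = {
    "btc_address": re.compile(r"\b(?:bc1|[13])[a-zA-Z0-9]{25,39}\b"),
    "paypal_email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def extract_payment_info(ocr_text):
    """Pull payment identifiers out of OCR'd screenshot text."""
    hits = {}
    for label, pattern in PAYMENT_PATTERNS.items():
        found = pattern.findall(ocr_text)
        if found:
            hits[label] = found
    return hits

def decoy_turn(message_text, ocr_text=None):
    """One decoy turn: log payment details if a payment screenshot
    arrives, otherwise keep the perpetrator talking."""
    if ocr_text:
        info = extract_payment_info(ocr_text)
        if info:
            # Feign compliance so the perpetrator reveals more steps.
            return "Got it, I'll send the payment now. Which plan is this for?"
    # Fallback naive-victim reply; in the paper this role is played by the LLM.
    return "Sorry, I'm new to this. Can you explain how it works?"
```

Routing image content through OCR before the reply step is what lets a text-only LLM handle the multimedia obfuscation the abstract describes.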