🤖 AI Summary
This study addresses the growing misuse of Telegram bots in financial fraud and illicit data trading, an issue lacking systematic investigation despite its societal significance. We present the largest known dataset of Telegram content to date, encompassing 67,000 newly identified channels, 492 million messages, and 32,000 bots. To analyze this ecosystem, we combine an automated bot-interaction framework with snowball sampling, multilingual natural language processing, and network topology modeling to classify bot functionalities and characterize community behaviors. Our analysis reveals dual-use patterns: while bots facilitate legitimate applications such as crowdsourcing, they are also extensively exploited as payment gateways, traffic-generation tools, and interfaces for malicious AI services. These findings provide empirical grounding for platform-level governance and regulatory interventions targeting bot-mediated abuse.
📝 Abstract
Telegram, initially a messaging app, has evolved into a platform where users can interact with various services through programmable applications called bots. Bots serve a wide range of uses, from moderating groups and assisting with online shopping to executing trades in financial markets. However, Telegram has been increasingly associated with various illicit activities -- financial scams, stolen-data trading, and non-consensual image sharing, among others -- raising concerns that bots may be facilitating these operations. This paper is the first to characterize Telegram bots at scale, through the following contributions. First, we offer the largest general-purpose message dataset and the first bot dataset: through snowball sampling from two published datasets, we uncover over 67,000 additional channels, 492 million messages, and 32,000 bots. Second, we develop a system that automatically interacts with bots to extract their functionality. Third, based on their descriptions, chat responses, and associated channels, we classify bots into several domains. Fourth, we investigate the communities each bot serves by analyzing supported languages, usage patterns (e.g., duration, reuse), and network topology. While our analysis discovers useful applications such as crowdsourcing, we also identify malicious bots (e.g., used for financial scams and illicit underground services) serving as payment gateways, referral systems, and malicious AI endpoints. By encouraging the research community to view bots as software infrastructure, this work aims to foster further research useful to content moderators and to support interventions against illicit activities.
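The snowball-sampling step described above can be sketched as a breadth-first crawl: start from seed channels and repeatedly follow links (e.g., forwarded messages) to discover new channels. This is a minimal illustration, not the paper's actual crawler; the `CHANNEL_LINKS` graph, function name, and depth limit are all hypothetical stand-ins for the real Telegram discovery process.

```python
from collections import deque

# Hypothetical link graph: each channel maps to channels it references
# (e.g., via forwarded messages or mentions) discovered while crawling.
CHANNEL_LINKS = {
    "seed_a": ["c1", "c2"],
    "seed_b": ["c2", "c3"],
    "c1": ["c4"],
    "c2": [],
    "c3": ["c4", "c5"],
    "c4": [],
    "c5": [],
}

def snowball_sample(seeds, max_depth=2):
    """Breadth-first snowball sampling: expand from seed channels,
    following discovered links up to max_depth hops."""
    discovered = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        channel, depth = frontier.popleft()
        if depth >= max_depth:
            continue  # stop expanding beyond the hop limit
        for neighbor in CHANNEL_LINKS.get(channel, []):
            if neighbor not in discovered:
                discovered.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return discovered

print(sorted(snowball_sample(["seed_a", "seed_b"])))
# → ['c1', 'c2', 'c3', 'c4', 'c5', 'seed_a', 'seed_b']
```

In the study's setting, each newly discovered channel would then be crawled for messages and bot mentions, feeding the next sampling round.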