Poisoning Behavioral-based Worker Selection in Mobile Crowdsensing using Generative Adversarial Networks

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes a critical vulnerability in behavior-based worker selection models for Mobile Crowd Sensing (MCS): susceptibility to stealthy adversarial attacks launched by malicious insiders. To this end, the authors propose the first Generative Adversarial Network (GAN)-based poisoning attack on behavioral modeling, which synthesizes realistic, history-like worker behavioral sequences that evade standard anomaly detectors and corrupt personalized behavior prediction models during training. Experiments on real-world datasets demonstrate that the attack reduces victim model accuracy by over 40%, bypasses mainstream anomaly detection mechanisms, and degrades task assignment rate and average worker earnings by up to 35%. The study is the first to systematically uncover this novel class of AI-driven worker selection vulnerabilities under insider threats, providing both a critical warning and an essential benchmark for evaluating and enhancing the robustness of MCS systems.
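
The summary describes the attack only at a high level. As a rough, non-authoritative sketch of the general mechanism, the PyTorch snippet below trains a small GAN to mimic fixed-length worker behavior sequences (e.g., windows of recent task-completion rates); once the generator's output is indistinguishable from real histories, sampled sequences can serve as poisoning points. The architecture, dimensions, and hyperparameters are illustrative assumptions, not the paper's actual design.

```python
# Minimal GAN sketch (illustrative only): learns to mimic fixed-length
# worker behavior sequences, e.g. windows of recent task-completion rates.
# Dimensions and hyperparameters are assumptions, not the paper's design.
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM = 20, 16  # assumed history window length and latent size

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, SEQ_LEN), nn.Sigmoid(),  # behavior values in [0, 1]
)
discriminator = nn.Sequential(
    nn.Linear(SEQ_LEN, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),  # P(sequence is a real history)
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    """One adversarial update on a batch of real worker histories."""
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator: score real histories as 1, generated ones as 0.
    fake = generator(torch.randn(n, NOISE_DIM)).detach()
    loss_d = bce(discriminator(real_batch), ones) + bce(discriminator(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into scoring fakes as real.
    fake = generator(torch.randn(n, NOISE_DIM))
    loss_g = bce(discriminator(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```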

📝 Abstract
With the widespread adoption of Artificial Intelligence (AI), AI-based tools and components are becoming omnipresent in today's solutions. However, these components and tools pose a significant threat when it comes to adversarial attacks. Mobile Crowdsensing (MCS) is a sensing paradigm that leverages the collective participation of workers and their smart devices to collect data. One of the key challenges at the selection stage is ensuring task completion, given workers' varying behavior. AI has been utilized to tackle this challenge by building a unique model for each worker to predict their behavior. However, integrating AI into the system introduces vulnerabilities that malicious insiders can exploit to reduce the revenue obtained by victim workers. This work proposes an adversarial attack targeting behavioral-based selection models in MCS. The proposed attack leverages Generative Adversarial Networks (GANs) to generate poisoning points that mislead the models during the training stage without being detected. This way, the potential damage GANs can inflict on worker selection in MCS can be anticipated. Simulation results using a real-life dataset show the effectiveness of the proposed attack in compromising the victim workers' models and evading an outlier detector, compared to a benchmark. In addition, the impact of the attack on reducing the payment obtained by victim workers is evaluated.
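
The abstract notes that the poisoning points evade an outlier detector but does not name the detector here. Below is a minimal sketch of the evasion-filtering idea, using scikit-learn's IsolationForest as a stand-in: generated candidates are injected into the victim's training data only if a detector fitted on the clean history classifies them as inliers. Shapes and distributions are mocked for illustration.

```python
# Illustrative evasion filter: keep only generated points that a standard
# outlier detector classifies as inliers before injecting them into the
# victim worker's training data. IsolationForest is a stand-in; the
# paper's actual detector is not named in this summary.
import numpy as np
from sklearn.ensemble import IsolationForest

def filter_stealthy(poison: np.ndarray, clean_history: np.ndarray) -> np.ndarray:
    """Return the subset of poison points that pass an outlier check
    fitted on the victim's clean behavioral history."""
    detector = IsolationForest(contamination=0.05, random_state=0)
    detector.fit(clean_history)
    keep = detector.predict(poison) == 1  # +1 = inlier, -1 = outlier
    return poison[keep]

# Example usage with synthetic stand-in data (shapes are assumptions):
rng = np.random.default_rng(0)
clean = rng.beta(8, 2, size=(500, 20))       # mostly-reliable worker histories
candidates = rng.beta(6, 3, size=(100, 20))  # GAN-generated candidates (mocked)
stealthy = filter_stealthy(candidates, clean)
training_set = np.vstack([clean, stealthy])  # poisoned training data
```
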
Problem

Research questions and friction points this paper is trying to address.

Adversarial attacks threaten AI-based worker selection in MCS.
GANs generate undetected poisoning points to mislead selection models.
Attack reduces victim workers' revenue by compromising behavioral models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using GANs to generate stealthy poisoning points
Targeting behavioral-based worker selection models (a toy selection/earnings sketch follows this list)
Evading detection by outlier detectors
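
To make the earnings claim concrete, here is a hypothetical toy model of why a corrupted behavior predictor reduces a victim's revenue: the platform assigns tasks to the top-k workers ranked by predicted completion probability, so depressing one worker's predictions pushes them out of the selection set entirely. The scores, payment rule, and top-k policy are illustrative assumptions, not the paper's experimental setup.

```python
# Toy illustration of why a poisoned behavior model cuts victim earnings:
# the platform assigns each task to the top-k workers by predicted
# completion probability and pays per completed assignment. All numbers
# and the payment rule are hypothetical.
import numpy as np

def earnings(pred_completion: np.ndarray, n_tasks: int = 100,
             k: int = 3, pay: float = 1.0) -> np.ndarray:
    """Expected earnings per worker under top-k selection."""
    total = np.zeros_like(pred_completion)
    top_k = np.argsort(pred_completion)[-k:]  # same ranking for every task here
    total[top_k] = n_tasks * pay * pred_completion[top_k]
    return total

clean_preds = np.array([0.9, 0.85, 0.8, 0.6, 0.5])  # victim is worker 0
poisoned_preds = clean_preds.copy()
poisoned_preds[0] = 0.55                            # prediction corrupted by poisoning

print(earnings(clean_preds))     # worker 0 earns the most
print(earnings(poisoned_preds))  # worker 0 drops out of the top-k and earns 0
```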