Adversarial VR: An Open-Source Testbed for Evaluating Adversarial Robustness of VR Cybersickness Detection and Mitigation

📅 2025-12-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing deep learning–based motion sickness detection and adaptive mitigation systems are vulnerable to adversarial attacks, leading to false detections, erroneous interventions, and degraded immersion; moreover, no open-source, real-time, end-to-end robustness evaluation platform exists. Method: We propose the first open-source VR testing platform integrating real-world eye-tracking and motion sensor inputs, a dynamic visual tunneling mitigation mechanism, and multiple adversarial attack methods (MI-FGSM, PGD, C&W), implemented in Unity with HTC Vive Pro Eye hardware. Contribution/Results: Experiments demonstrate that C&W attacks degrade Transformer-based detection accuracy by 5.94×, and all attacks successfully disable the mitigation functionality. The platform establishes a reproducible robustness benchmark, addressing a critical gap in standardized evaluation tools for VR motion sickness systems.

๐Ÿ“ Abstract
Deep learning (DL)-based automated cybersickness detection methods, along with adaptive mitigation techniques, can enhance user comfort and interaction. However, recent studies show that these DL-based systems are susceptible to adversarial attacks; small perturbations to sensor inputs can degrade model performance, trigger incorrect mitigation, and disrupt the user's immersive experience (UIX). Additionally, there is a lack of dedicated open-source testbeds for evaluating the robustness of these systems under adversarial conditions, limiting the ability to assess their real-world effectiveness. To address this gap, this paper introduces Adversarial-VR, a novel real-time VR testbed for evaluating DL-based cybersickness detection and mitigation strategies under adversarial conditions. Developed in Unity, the testbed integrates two state-of-the-art (SOTA) DL models, DeepTCN and Transformer, trained on the open-source MazeSick dataset for real-time cybersickness severity detection, and applies a dynamic visual tunneling mechanism that adjusts the field-of-view based on model outputs. To assess robustness, we incorporate three SOTA adversarial attacks, MI-FGSM, PGD, and C&W, which successfully prevent cybersickness mitigation by fooling the DL models' outputs. We implement these attacks using a testbed with a custom-built VR Maze simulation and an HTC Vive Pro Eye headset, and we open-source our implementation for widespread adoption by VR developers and researchers. Results show that these adversarial attacks are capable of successfully fooling the system. For instance, the C&W attack results in a 5.94× decrease in accuracy for the Transformer-based cybersickness model compared to the accuracy without the attack.
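The attacks named above share a common recipe: iteratively perturb the sensor inputs within a small L∞ budget so the detector's output flips. As a minimal sketch of the MI-FGSM idea, here is a NumPy version run against a toy logistic "sickness" classifier standing in for the paper's DeepTCN/Transformer models; all names and parameter values are illustrative, not taken from the testbed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mi_fgsm(x, y, w, b, eps=0.1, steps=10, mu=1.0):
    """Momentum Iterative FGSM against a toy logistic classifier.

    x : clean sensor feature vector; y : true label (0/1);
    w, b : model weights. Returns x_adv with ||x_adv - x||_inf <= eps.
    """
    alpha = eps / steps              # per-step size so total stays in budget
    g = np.zeros_like(x, dtype=float)  # accumulated momentum
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w           # d(cross-entropy)/dx for logistic model
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized momentum
        x_adv = x_adv + alpha * np.sign(g)                # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)          # project to eps-ball
    return x_adv
```

With the true label y = 1 ("sick"), each step follows the sign of the momentum-accumulated gradient, which increases the loss and pushes the predicted severity down; this is the mechanism by which such attacks suppress the mitigation trigger. PGD differs mainly in dropping the momentum term, while C&W solves an optimization problem rather than taking fixed-size sign steps.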
Problem

Research questions and friction points this paper is trying to address.

Evaluates adversarial robustness of VR cybersickness detection and mitigation systems.
Addresses lack of open-source testbeds for assessing real-world effectiveness under attacks.
Investigates how adversarial perturbations degrade model performance and disrupt user experience.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source VR testbed for adversarial robustness evaluation
Integrates DeepTCN and Transformer models with visual tunneling
Implements MI-FGSM, PGD, and C&W attacks to assess vulnerabilities
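The visual tunneling mitigation narrows the field-of-view as predicted severity rises, restoring the full view when the model reports no sickness. A minimal sketch of such a severity-to-FOV mapping, with illustrative numbers only (the actual Unity implementation and its parameters live in the open-sourced testbed):

```python
def tunneling_fov(severity, fov_full=110.0, fov_min=60.0, max_severity=3):
    """Map a predicted cybersickness severity level (0..max_severity)
    to a vignette field-of-view in degrees: higher severity, narrower FOV.
    Severity values outside the range are clamped."""
    s = min(max(severity, 0), max_severity) / max_severity  # normalize to [0, 1]
    return fov_full - s * (fov_full - fov_min)              # linear interpolation
```

This is exactly the coupling the attacks exploit: if an adversarial input drives the detector's severity output to 0, the mapping returns the full FOV and the mitigation never engages.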