🤖 AI Summary
This study investigates how input devices and hand visualization modalities in virtual reality (VR) influence user workload and task performance, and thereby training and demonstration efficacy. Three interaction configurations (motion-capture gloves, controllers with hand visualization, and controllers with controller visualization) were compared during everyday kitchen tasks to assess differences in user experience and operational behavior. Using the System Usability Scale (SUS), the NASA Task Load Index (NASA-TLX), and trajectory segmentation analysis, the study quantified task efficiency, precision, and cognitive load. Results indicate that controllers enable faster and more stable performance in pick-and-place tasks, whereas gloves afford greater naturalness in manner-oriented tasks such as cutting but exhibit higher movement variability. No significant differences emerged in overall usability or cognitive workload across conditions. The findings highlight a trade-off between interaction efficiency and naturalism, suggesting that VR interaction design should be tailored to the task type, such as pick-and-place versus manner-oriented activities.
📝 Abstract
Virtual Reality (VR) is increasingly used for training and demonstration purposes in applications ranging from robot learning to rehabilitation. However, the choice of input device and its visualization can influence workload and thus user performance, leading to suboptimal demonstrations or reduced training effects. This study investigates how different VR input configurations (motion-capture gloves, controllers with hand visualization, and controllers with controller visualization) affect user experience and task execution, with the goal of identifying which configuration is best suited to which type of task. Participants performed various kitchen-related activities of daily living (ADLs), including object placement, cutting, cleaning, and pouring, in a simulated environment. To address two research questions, we evaluated user experience using the System Usability Scale and the NASA Task Load Index (RQ1) and task-specific interaction behavior (RQ2). The latter was assessed using trajectory segmentation, analyzing movement efficiency, unnecessary actions, and execution precision. While no significant differences in overall usability and workload were found, trajectory analysis revealed configuration-specific execution behaviors with distinct movement strategies. Controllers enabled significantly faster task completion with less movement variability in pick-and-place tasks such as table setting. In contrast, motion-capture gloves produced more natural movements with fewer unnecessary actions but showed greater variance in movement patterns in manner-oriented tasks such as cutting bread. These findings highlight trade-offs between efficiency and naturalism and have implications for optimizing VR-based training, improving the quality of user-generated demonstrations, and tailoring interaction design to specific application goals.
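The abstract does not detail how the trajectory segmentation was implemented. As a rough illustration of the kind of analysis described, the sketch below segments a 3D hand trajectory into movement bouts via a velocity threshold and computes a path-efficiency ratio; the function names, the threshold value, and the metric choices are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def segment_trajectory(positions, timestamps, vel_threshold=0.05):
    """Split a 3D hand trajectory into movement segments.

    positions     : (N, 3) array of hand positions in meters
    timestamps    : (N,) array of sample times in seconds
    vel_threshold : speed (m/s) below which the hand counts as at rest
                    (0.05 m/s is an illustrative value, not from the paper)

    Returns a list of (start, end) index pairs, one per movement bout;
    the number of bouts can serve as a proxy for unnecessary actions.
    """
    velocities = (np.linalg.norm(np.diff(positions, axis=0), axis=1)
                  / np.diff(timestamps))
    moving = velocities > vel_threshold

    # Find contiguous runs of "moving" samples via sign changes.
    edges = np.diff(moving.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if moving[0]:
        starts = np.r_[0, starts]
    if moving[-1]:
        ends = np.r_[ends, len(moving)]
    return list(zip(starts, ends))

def path_efficiency(positions):
    """Straight-line distance divided by traveled arc length.

    1.0 means a perfectly direct reach; lower values indicate detours,
    which relates to the movement-efficiency measure in the abstract.
    """
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    arc_length = steps.sum()
    direct = np.linalg.norm(positions[-1] - positions[0])
    return direct / arc_length if arc_length > 0 else 1.0

# Example on synthetic data: a noisy reach sampled at 90 Hz.
if __name__ == "__main__":
    t = np.linspace(0.0, 2.0, 180)
    target = np.array([0.4, 0.1, 0.3])
    pos = np.outer(np.clip(t - 0.5, 0, 1), target)  # pause, then reach
    pos += np.random.default_rng(0).normal(0, 0.002, pos.shape)
    print(segment_trajectory(pos, t))
    print(f"path efficiency: {path_efficiency(pos):.3f}")
```

Under this kind of scheme, per-condition comparisons (e.g., controllers vs. gloves) would aggregate segment counts, durations, and efficiency ratios across trials before statistical testing.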