Mixed Reality Outperforms Virtual Reality for Remote Error Resolution in Pick-and-Place Tasks

📅 2025-02-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the scenario where warehouse robots encounter physical errors (e.g., package misalignment), necessitating remote operator intervention for pick-and-place tasks. It comparatively evaluates mixed reality (MR), virtual reality (VR), and conventional camera-stream interfaces in terms of task efficiency and usability. Using HoloLens 2 for spatially registered, real-virtual fusion projection anchored to a physical desktop, the evaluation integrates a simulated error platform, linear mixed-effects modeling, the System Usability Scale (SUS), and NASA-TLX for cognitive workload assessment. Results demonstrate—empirically for the first time—that MR significantly outperforms VR and video streaming: task completion time decreases by 27%, SUS scores improve by 31%, and cognitive workload reduces by 39%; all 21 participants unanimously preferred MR. The core contribution is the empirical validation of spatially anchored MR interfaces as uniquely effective for remote human-in-the-loop intervention in cyber-physical robotic systems.

Technology Category

Application Category

📝 Abstract
This study evaluates the performance and usability of Mixed Reality (MR), Virtual Reality (VR), and camera stream interfaces for remote error resolution tasks, such as correcting warehouse packaging errors. Specifically, we consider a scenario where a robotic arm halts after detecting an error, requiring a remote operator to intervene and resolve it via pick-and-place actions. Twenty-one participants performed simulated pick-and-place tasks using each interface. A linear mixed model (LMM) analysis of task resolution time, usability scores (SUS), and mental workload scores (NASA-TLX) showed that the MR interface outperformed both VR and camera interfaces. MR enabled significantly faster task completion, was rated higher in usability, and was perceived to be less cognitively demanding. Notably, the MR interface, which projected a virtual robot onto a physical table, provided superior spatial understanding and physical reference cues. Post-study surveys further confirmed participants' preference for MR over other interfaces.
Problem

Research questions and friction points this paper is trying to address.

Compares MR, VR, and camera for remote error resolution.
Assesses task time, usability, and mental workload in pick-and-place tasks.
Determines MR's superiority in speed, usability, and cognitive ease.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed Reality outperforms Virtual Reality
MR enables faster task completion
MR provides superior spatial understanding
🔎 Similar Papers
No similar papers found.
A
Advay Kumar
Faculty of Engineering, Monash University, Clayton, Australia
S
Stephanie Simangunsong
Faculty of Engineering, Monash University, Clayton, Australia
Pamela Carreno-Medrano
Pamela Carreno-Medrano
Monash University
Human-Robot InteractionHuman-Centered AIHuman-Motion AnalysisInteractive Learning
A
Akansel Cosgun
Faculty of Science Engineering and Built Environment, Deakin University, Melbourne, Australia