AI Summary
Traditional interaction methods in CAVE systems induce unnatural motion, severely degrading immersion and sense of self-presence while provoking cybersickness. To address this, we propose a novel full-body motion-driven virtual locomotion framework. Our approach uniquely integrates Perspective-n-Point (PnP)-based dynamic camera calibration with a lightweight deep learning model for real-time human pose estimation, enabling high-accuracy, low-latency mapping from user motion to virtual displacement. Furthermore, we optimize the synchronization between motion capture and real-time rendering to enhance kinesthetic fidelity within four-wall projection environments. Experimental evaluation demonstrates that our framework significantly improves users' self-presence and perceived motion naturalness compared to baseline techniques, while reducing cybersickness incidence by over 30%.
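The mapping from user motion to virtual displacement described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the action classes, the per-class speed gains, and the `virtual_step` helper are all assumptions introduced for clarity.

```python
import numpy as np

# Hypothetical gain per recognized action class (metres per rendered frame).
# The real framework's classes and speeds are not specified in the source.
ACTION_SPEED = {
    "idle": 0.0,
    "walk": 0.03,
    "run": 0.08,
}

def virtual_step(action, hip_dir_xy, gain=ACTION_SPEED):
    """Return a 3D virtual-camera translation for one rendered frame.

    action     -- action category from the pose/recognition model
    hip_dir_xy -- 2D heading of the user on the CAVE floor plane
    """
    speed = gain.get(action, 0.0)
    d = np.asarray(hip_dir_xy, dtype=float)
    n = np.linalg.norm(d)
    if n > 0:
        d = d / n                      # normalize the heading vector
    # Map the floor-plane heading (x, z) into the virtual world; y is up.
    return np.array([speed * d[0], 0.0, speed * d[1]])
```

In a real pipeline this per-frame step would be scaled by the measured frame time and smoothed, so that the rendered viewpoint tracks the user's gait without jitter.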
Abstract
Cave Automatic Virtual Environment (CAVE) is one of the immersive virtual reality (VR) devices currently used to present virtual environments. However, locomotion in the CAVE is limited by unnatural interaction methods, which severely hinder user experience and immersion. We propose a locomotion framework for CAVE environments that enhances the immersive locomotion experience through optimized human motion recognition. First, we construct a four-sided display CAVE system and calibrate the camera with a dynamic method based on Perspective-n-Point (PnP). Using the obtained camera intrinsic and extrinsic parameters, an action recognition architecture then infers the user's action category. Finally, the action category is transmitted to a graphics workstation that renders the corresponding display effects on the screens. We designed a user study to validate the effectiveness of our method. Compared to traditional methods, ours significantly improves realness and self-presence in the virtual environment while effectively reducing motion sickness.
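Once PnP calibration has recovered the camera intrinsics K and extrinsics (R, t), they are used to relate 3D world points to image pixels via the pinhole model x = K[R|t]X. The sketch below illustrates that projection step only; the numeric values of K, R, and t are placeholders, not calibration results from the paper.

```python
import numpy as np

# Illustrative intrinsics: focal lengths fx = fy = 800 px,
# principal point at (320, 240). These are assumed values.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                 # camera axes aligned with world axes (assumed)
t = np.array([0.0, 0.0, 2.0])  # world origin 2 m in front of the camera

def project(X_world):
    """Project a 3D world point into pixel coordinates via x = K [R|t] X."""
    X_cam = R @ np.asarray(X_world, dtype=float) + t   # world -> camera frame
    x = K @ X_cam                                      # camera frame -> homogeneous pixels
    return x[:2] / x[2]                                # perspective divide

# A point on the optical axis lands at the principal point:
# project([0, 0, 0]) -> [320., 240.]
```

In the framework, this relation is used in the opposite direction: observed 2D keypoints and known 3D reference points constrain K, R, and t, which PnP solves for dynamically as the system runs.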