🤖 AI Summary
This work addresses the frequent collisions in redirected walking (RDW) caused by geometric incompatibilities between physical and virtual environments, a challenge inadequately tackled by existing scene generation methods that lack explicit modeling of physical feasibility. To this end, we propose HCVR, a novel framework that explicitly optimizes physical–virtual spatial compatibility as a core objective. HCVR introduces the boundary-aware ENI++ metric and leverages a large language model for context-aware 3D asset retrieval and layout generation, strategically selecting, scaling, and placing objects to occlude virtual regions incompatible with the physical space. Experimental results demonstrate that HCVR reduces physical collisions by a factor of 22.78 compared to conventional LLM+RDW approaches, lowers ENI++ scores by 35.89%, and improves user-reported layout satisfaction by 12.5%.
📝 Abstract
Natural walking enhances immersion in virtual environments (VEs), but physical space limitations and obstacles hinder exploration, especially in large virtual scenes. Redirected Walking (RDW) techniques mitigate this by subtly manipulating the virtual camera to guide users away from physical collisions within pre-defined VEs. However, RDW efficacy diminishes significantly when substantial geometric divergence exists between the physical and virtual environments, leading to unavoidable collisions. Existing scene generation methods primarily focus on object relationships or layout aesthetics, often neglecting the physical compatibility required for effective RDW. To address this, we introduce HCVR (High Compatibility Virtual Reality Environment Generation), a novel framework that generates virtual scenes inherently optimized for alignment-based RDW controllers. HCVR first employs ENI++, a novel boundary-sensitive metric that evaluates the incompatibility between physical and virtual spaces by comparing rotation-sensitive visibility polygons. Guided by the ENI++ compatibility map and user prompts, HCVR utilizes a Large Language Model (LLM) for context-aware 3D asset retrieval and initial layout generation. The framework then strategically adjusts object selection, scaling, and placement to maximize coverage of virtually incompatible regions, effectively guiding users towards RDW-feasible paths. User studies evaluating physical collisions and layout quality demonstrate HCVR's effectiveness: HCVR-generated scenes result in 22.78 times fewer physical collisions and 35.89% lower ENI++ scores compared to LLM-based scene generation with RDW, while also receiving 12.5% higher user ratings for layout design.
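To make the visibility-comparison idea behind ENI++ concrete, the following is a minimal sketch (not the paper's actual metric): it approximates visibility polygons by ray casting on a 2D occupancy grid and scores a position by how much of the visible region differs between an aligned physical and virtual map. All function names, the grid representation, and the symmetric-difference scoring are illustrative assumptions; the real ENI++ additionally accounts for boundaries and rotation sensitivity.

```python
import math

def visible_cells(grid, x0, y0, n_rays=180, max_dist=50):
    """Cast rays from (x0, y0) over a 2D occupancy grid (1 = obstacle)
    and return the set of free cells visible from that position.
    Grid-based stand-in for a visibility polygon."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    for k in range(n_rays):
        theta = 2 * math.pi * k / n_rays
        dx, dy = math.cos(theta), math.sin(theta)
        for t in range(1, max_dist):
            x, y = int(x0 + dx * t), int(y0 + dy * t)
            if not (0 <= x < cols and 0 <= y < rows) or grid[y][x]:
                break  # ray blocked by map boundary or obstacle
            seen.add((x, y))
    return seen

def incompatibility(phys, virt, x, y):
    """Illustrative incompatibility score at one aligned position:
    fraction of cells visible in exactly one of the two environments
    (symmetric difference over union); 0.0 means identical visibility."""
    vp = visible_cells(phys, x, y)
    vv = visible_cells(virt, x, y)
    union = vp | vv
    return len(vp ^ vv) / len(union) if union else 0.0
```

Averaging such per-position scores over the walkable area yields a compatibility map in the spirit of the one HCVR uses to decide where to place occluding objects.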