🤖 AI Summary
Existing shared-control frameworks for omnidirectional smart electric wheelchairs suffer from unintuitive user interaction and underutilize full omnidirectional mobility. To address this, we propose the first reinforcement learning (RL)-based shared-control framework for such platforms. Trained in Isaac Gym and validated in Gazebo simulation, it jointly optimizes collision avoidance, orientation adaptability, motion smoothness, and user cognitive load and comfort, enabling an intuitive mapping from 2D user inputs to 3D omnidirectional motion. Compared to conventional approaches, our policy significantly reduces cognitive load, ensures collision-free navigation, yields more natural orientation adjustments, and achieves comparable or superior motion smoothness. Crucially, this work constitutes the first deployment and real-world validation of an RL-driven shared-control policy on a physical omnidirectional wheelchair platform, establishing a scalable paradigm for autonomy–user collaboration in intelligent mobility aids.
📝 Abstract
Smart electric wheelchairs can improve the user experience by supporting the driver with shared control. State-of-the-art work has shown the potential of shared control to improve navigation safety for non-holonomic robots. For holonomic systems, however, current approaches often lead to behavior that is unintuitive for the user and fail to exploit the full potential of omnidirectional driving. We therefore propose a reinforcement learning-based method that takes a 2D user input and outputs a 3D motion while ensuring user comfort and reducing the cognitive load on the driver. Our approach is trained in Isaac Gym and tested in simulation in Gazebo. We compare different RL agent architectures and reward functions using metrics that account for cognitive load and user comfort. We show that our method ensures collision-free navigation while intelligently orienting the wheelchair, achieving smoothness that is better than or competitive with a previous non-learning-based method. We further perform a sim-to-real transfer and demonstrate, to the best of our knowledge, the first real-world implementation of RL-based shared control for an omnidirectional mobility platform.