🤖 AI Summary
This work addresses the challenge of integrated intelligent control for assistive robots, which is hindered by the lack of multimodal synthetic data for coordinated control of wheelchairs and robotic arms. To bridge this gap, the authors propose the WheelArm integrated system concept and develop the WheelArm-Sim simulation framework, which embeds high-fidelity dynamic models of both a wheeled base and a robotic arm within Isaac Sim to synchronously capture multimodal sensor data. The framework yields the first multimodal synthetic dataset supporting joint navigation-and-manipulation tasks, comprising 13 distinct tasks, 232 trajectories, and 67,783 samples. The utility of the dataset is demonstrated through a baseline model for action prediction in a mustard-bottle grasping task.
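The paper's collection pipeline is built on Isaac Sim and is not reproduced here; purely as an illustration of the synchronized-capture idea described above, the following minimal Python sketch steps a simulator one tick at a time and reads every sensor at that same tick. The `Sensor` type, the stub readers, and the sample schema are our assumptions, not the authors' or Isaac Sim's API.

```python
# Hypothetical sketch of synchronized multimodal capture (not the authors' code).
# Stub readers stand in for simulator cameras and joint-state queries.
from dataclasses import dataclass
from typing import Callable, Dict, List
import numpy as np

@dataclass
class Sensor:
    """A stand-in for one simulated sensor; `read` returns one observation."""
    name: str
    read: Callable[[], np.ndarray]

def collect_trajectory(sensors: List[Sensor],
                       step_sim: Callable[[], None],
                       get_action: Callable[[], np.ndarray],
                       num_steps: int) -> List[Dict[str, np.ndarray]]:
    """Advance the simulator tick by tick and read every sensor at the
    same tick, so all modalities in a sample share one timestamp."""
    samples: List[Dict[str, np.ndarray]] = []
    for t in range(num_steps):
        step_sim()                                   # advance physics one tick
        obs = {s.name: s.read() for s in sensors}    # synchronized reads
        obs["action"] = get_action()                 # wheel + arm command at tick t
        obs["step"] = np.array(t)
        samples.append(obs)
    return samples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sensors = [
        Sensor("rgb", lambda: rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)),
        Sensor("joint_pos", lambda: rng.standard_normal(7)),  # 7-DoF arm (assumed)
        Sensor("base_vel", lambda: rng.standard_normal(2)),   # two wheel velocities
    ]
    traj = collect_trajectory(sensors, step_sim=lambda: None,
                              get_action=lambda: rng.standard_normal(9),
                              num_steps=5)
    print(len(traj), sorted(traj[0].keys()))
```

In a real pipeline the lambdas would be replaced by Isaac Sim camera and articulation queries, and each trajectory would be serialized to disk rather than kept in memory.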
📝 Abstract
Wheelchairs and robotic arms enhance independent living by assisting individuals with upper-body and mobility limitations in their activities of daily living (ADLs). Although recent advances in assistive robotics have addressed Wheelchair-Mounted Robotic Arms (WMRAs) and wheelchairs separately, integrated, unified control of the combined system using machine learning models remains largely underexplored. To fill this gap, we introduce the concept of WheelArm, an integrated cyber-physical system (CPS) that combines wheelchair and robotic-arm control. Data collection is the first step toward developing WheelArm models. In this paper, we present WheelArm-Sim, a simulation framework built in Isaac Sim for synthetic data collection. We evaluate its capability by collecting a multimodal dataset that combines manipulation and navigation, comprising 13 tasks, 232 trajectories, and 67,783 samples. To demonstrate the potential of the WheelArm dataset, we implement a baseline model for action prediction on the mustard-picking task. The results show that data collected with WheelArm-Sim can support data-driven machine learning models for integrated control.
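The abstract does not specify the baseline's architecture; as one plausible shape for such a model, here is a minimal PyTorch sketch that regresses a combined base-and-arm action from an RGB frame and proprioceptive state. The class name, layer sizes, and the 9-dimensional action split (2 wheel plus 7 arm commands) are illustrative assumptions, not the authors' design.

```python
# Hypothetical baseline sketch (not the authors' model): predict a combined
# base + arm action from an RGB observation and proprioceptive state.
import torch
import torch.nn as nn

class WheelArmPolicy(nn.Module):
    def __init__(self, proprio_dim: int = 9, action_dim: int = 9):
        super().__init__()
        # Small CNN encoder for 64x64 RGB observations.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 64)
        )
        # Fuse image features with joint/base state, then regress actions.
        self.head = nn.Sequential(
            nn.Linear(64 + proprio_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),   # e.g. 2 wheel + 7 arm commands (assumed)
        )

    def forward(self, rgb: torch.Tensor, proprio: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(rgb)
        return self.head(torch.cat([feat, proprio], dim=-1))

if __name__ == "__main__":
    model = WheelArmPolicy()
    rgb = torch.rand(4, 3, 64, 64)       # batch of RGB observations
    proprio = torch.rand(4, 9)           # arm joint angles + base velocity
    actions = model(rgb, proprio)
    loss = nn.functional.mse_loss(actions, torch.zeros_like(actions))
    print(actions.shape, loss.item())    # torch.Size([4, 9])
```

Trained with a simple regression loss on the dataset's action labels, a model of roughly this shape would serve as the kind of data-driven baseline the abstract describes.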