A framework for training and benchmarking algorithms that schedule robot tasks

📅 2024-08-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address poor adaptability and inconsistent evaluation of service robot task scheduling under uncertain human activity, this paper introduces the first benchmarking framework for dynamic scenarios. Methodologically, it establishes a standardized API, a parameterized ROS-based simulation environment, and a reproducible suite of scenarios—incorporating modeling of localization noise, mobile pedestrians, and object uncertainty—and defines a multi-objective evaluation protocol assessing latency, success rate, and robustness. Key contributions include: (i) the first statistically grounded evaluation paradigm enabling fair, cross-algorithm comparison; (ii) empirical validation across three representative tasks—patrolling, fall assistance, and pick-and-place—demonstrating improved algorithmic comparability and performance analysis; and (iii) full open-sourcing of code and scenarios to foster community-wide standardization and reproducibility.
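The multi-objective evaluation protocol named above (latency, success rate, robustness) can be sketched as a simple aggregation over per-scenario runs. The names `EpisodeResult` and `evaluate`, and the choice of robustness as the success-rate ratio between perturbed and nominal scenarios, are illustrative assumptions, not the paper's actual metrics:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EpisodeResult:
    """Outcome of one scenario run (hypothetical record)."""
    latency_s: float  # time from task arrival to completion
    succeeded: bool   # whether the task finished correctly

def evaluate(results, perturbed_results):
    """Aggregate latency, success rate, and a simple robustness score.

    Robustness is sketched here as the success-rate ratio between
    perturbed scenarios (localisation noise, moving pedestrians) and
    nominal ones, so 1.0 means performance is unaffected by uncertainty.
    """
    success_rate = mean(r.succeeded for r in results)
    perturbed_rate = mean(r.succeeded for r in perturbed_results)
    return {
        "mean_latency_s": mean(r.latency_s for r in results),
        "success_rate": success_rate,
        "robustness": perturbed_rate / success_rate if success_rate else 0.0,
    }
```

Reporting a ratio rather than raw perturbed success keeps the robustness score comparable across algorithms with different baseline success rates.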

📝 Abstract
Service robots work in a changing environment inhabited by exogenous agents such as humans. In the service robotics domain, many uncertainties arise from exogenous actions and from inaccurate localisation of objects and of the robot itself. This makes the robot task scheduling problem challenging. In this article, we propose a benchmarking framework for systematically assessing the performance of algorithms that schedule robot tasks. The robot environment incorporates a map of the room, furniture, transportable objects, and moving humans. The framework defines interfaces for the algorithms, the tasks to be executed, and the evaluation methods. The system consists of several tools that ease test-scenario generation for training AI-based scheduling algorithms and statistical testing. For benchmarking purposes, a set of scenarios is chosen, and the performance of several scheduling algorithms is assessed. The source code is published to serve the community for tuning and comparable assessment of robot task scheduling algorithms for service robots. The framework is validated by assessing scheduling algorithms for a mobile robot executing patrol, human fall assistance, and simplified pick-and-place tasks.
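The abstract's "interfaces for the algorithms" could take a shape like the following minimal sketch. The `Task` fields, the `Scheduler` base class, and the `PriorityScheduler` baseline are all assumptions for illustration; the framework's real API is defined in the published source:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Task:
    """A pending robot task (hypothetical fields)."""
    name: str          # e.g. "patrol", "fall_assist", "pick_place"
    priority: int      # larger value = more urgent
    deadline_s: float  # seconds until the task expires

class Scheduler(ABC):
    """Interface a scheduling algorithm would implement for benchmarking."""

    @abstractmethod
    def select(self, pending: List[Task]) -> Optional[Task]:
        """Pick the next task to execute, or None if the queue is empty."""

class PriorityScheduler(Scheduler):
    """Baseline: highest priority first; earlier deadline breaks ties."""

    def select(self, pending: List[Task]) -> Optional[Task]:
        if not pending:
            return None
        return max(pending, key=lambda t: (t.priority, -t.deadline_s))
```

A shared abstract interface like this is what makes the cross-algorithm comparison fair: every scheduler, rule-based or learned, is driven through the same `select` call by the benchmark harness.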
Problem

Research questions and friction points this paper is trying to address.

Service Robots
Task Scheduling
Adaptability in Uncertain Environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robotics Algorithm Evaluation
Adaptive Test Scenarios
Service Robotics Optimization