🤖 AI Summary
Collecting human demonstrations for robot policy training is costly and typically requires expert operators, which limits scale. RoboCrowd addresses this with a crowdsourced, in-situ demonstration collection framework built on the ALOHA bimanual teleoperation platform and deployed in a real-world university cafe. It instantiates three classes of incentive mechanisms tailored to robotic data collection (material rewards, intrinsic interest, and social comparison) through tasks that offer physical rewards, engaging or challenging manipulations, and gamification elements such as a real-time leaderboard. Over a two-week deployment, more than 200 participants voluntarily contributed over 800 interaction episodes. Using this crowdsourced dataset as a pretraining corpus improved downstream policy performance by up to 20% after fine-tuning on expert demonstrations, supporting the feasibility of low-cost, large-scale demonstration collection that reduces reliance on expert operators.
📝 Abstract
In recent years, imitation learning from large-scale human demonstrations has emerged as a promising paradigm for training robot policies. However, collecting large quantities of human demonstrations imposes a significant burden, both in collection time and in the need for access to expert operators. We introduce a new data collection paradigm, RoboCrowd, which distributes the workload by utilizing crowdsourcing principles and incentive design. RoboCrowd enables scalable data collection and facilitates more efficient learning of robot policies. We build RoboCrowd on top of ALOHA (Zhao et al., 2023), a bimanual platform that supports data collection via puppeteering, to explore the design space for crowdsourcing in-person demonstrations in a public environment. We propose three classes of incentive mechanisms to appeal to users' varying sources of motivation for interacting with the system: material rewards, intrinsic interest, and social comparison. We instantiate these incentives through tasks that include physical rewards, engaging or challenging manipulations, and gamification elements such as a leaderboard. We conduct a large-scale, two-week field experiment in which the platform is situated in a university cafe. We observe significant engagement with the system: over 200 individuals independently volunteered to provide a total of over 800 interaction episodes. Our findings validate the proposed incentives as mechanisms for shaping the quantity and quality of users' data. Further, we demonstrate that the crowdsourced data can serve as useful pre-training data for policies fine-tuned on expert demonstrations, boosting performance by up to 20% compared to when this data is not available. These results suggest the potential for RoboCrowd to reduce the burden of robot data collection by carefully applying crowdsourcing and incentive design principles.
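The pretrain-then-finetune recipe described in the abstract follows standard imitation learning practice: behavior-clone on the large, noisier crowdsourced corpus, then fine-tune on a small set of expert demonstrations. The sketch below is a minimal, hypothetical illustration of that recipe, not the paper's actual code; the MLP policy, dimensions, dataset names, and hyperparameters are all illustrative assumptions standing in for the real visuomotor policy and datasets.

```python
# Hypothetical sketch of the pretrain-then-finetune recipe from the abstract.
# The simple MLP, dimensions, and random stand-in data are assumptions; the
# paper's actual policy architecture and datasets are not reproduced here.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, ACT_DIM = 64, 14  # e.g., proprioceptive features -> bimanual joint targets

policy = nn.Sequential(  # stand-in for the real visuomotor policy
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)

def behavior_cloning(policy, loader, epochs, lr):
    """Supervised imitation: regress demonstrated actions from observations."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, act in loader:
            loss = nn.functional.mse_loss(policy(obs), act)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Stand-in datasets: a large crowdsourced corpus and a small clean expert set.
crowd = TensorDataset(torch.randn(8000, OBS_DIM), torch.randn(8000, ACT_DIM))
expert = TensorDataset(torch.randn(500, OBS_DIM), torch.randn(500, ACT_DIM))

# 1) Pretrain on the crowdsourced episodes.
behavior_cloning(policy, DataLoader(crowd, batch_size=256, shuffle=True),
                 epochs=10, lr=1e-3)
# 2) Fine-tune on expert demonstrations, typically with a smaller learning rate.
behavior_cloning(policy, DataLoader(expert, batch_size=64, shuffle=True),
                 epochs=20, lr=1e-4)
```

Under this recipe, the reported gain of up to 20% corresponds to comparing the fine-tuned policy with and without the crowdsourced pretraining stage.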