A Benchmarking Study of Vision-based Robotic Grasping Algorithms

📅 2025-03-14
🤖 AI Summary
This work addresses the challenge of evaluating vision-based robotic grasping algorithms under variable real-world conditions. We present a large-scale, standardized benchmarking study spanning diverse hardware platforms, environmental settings, and laboratories. We systematically evaluate four representative algorithms, two deep learning-based (e.g., GraspNet) and two analytical, across seven perturbation dimensions (including illumination, background texture, camera noise, and gripper type), conducting 5,040 experiments in both simulation and on physical robot platforms, followed by multi-laboratory reproducibility validation. We publicly release all experimental videos and the complete benchmark software toolchain. Results reveal a significant performance gap of 23–41% between simulation and real-robot execution; background texture and camera noise emerge as the most critical factors affecting robustness. This work establishes a standardized evaluation paradigm and an empirical foundation for fair algorithm comparison and reliable real-world deployment.

📝 Abstract
We present a benchmarking study of vision-based robotic grasping algorithms with distinct approaches and provide a comparative analysis. In particular, we compare two machine-learning-based and two analytical algorithms using an existing benchmarking protocol from the literature and determine the algorithms' strengths and weaknesses under different experimental conditions. These conditions include variations in lighting, background textures, cameras with different noise levels, and grippers. We also run analogous experiments in simulation and with real robots and report the discrepancies. Some experiments are repeated in two different laboratories using the same protocols to further analyze the repeatability of our results. We believe that this study, comprising 5,040 experiments, provides important insights into the role and challenges of systematic experimentation in robotic manipulation, and guides the development of new algorithms by considering the factors that could impact their performance. The experiment recordings and our benchmarking software are publicly available.
Problem

Research questions and friction points this paper is trying to address.

Compare vision-based robotic grasping algorithms
Analyze algorithm performance under varied conditions
Provide insights for systematic robotic experimentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares machine-learning and analytical grasping algorithms
Tests under varied lighting, textures, and camera conditions
Conducts 5040 experiments across simulations and real robots
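An experimental scale like the 5,040 trials above typically arises from a full-factorial crossing of the perturbation factors with repetitions. A minimal sketch of how such a grid can be enumerated, using hypothetical factor names and levels (the paper's exact factor breakdown is not given here):

```python
from itertools import product

# Hypothetical factor levels for illustration only; the paper's exact
# breakdown of algorithms, conditions, and repetitions is not stated here.
factors = {
    "algorithm": ["ml_1", "ml_2", "analytical_1", "analytical_2"],
    "lighting": ["bright", "dim", "directional"],
    "background": ["plain", "textured"],
    "camera": ["low_noise", "high_noise"],
    "gripper": ["parallel_jaw", "vacuum"],
    "platform": ["simulation", "real_robot"],
}

# Enumerate every combination of levels once; repeated trials per
# condition would multiply this count further.
grid = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(grid))  # 4 * 3 * 2 * 2 * 2 * 2 = 192 unique conditions
```

Enumerating the grid up front makes the protocol reproducible: every trial is addressable by its factor combination, which is what allows results to be compared across laboratories.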
Bharath K Rameshbabu
Robotics Engineering Department, Worcester Polytechnic Institute, Worcester, MA 01609, US
Sumukh S Balakrishna
Robotics Engineering Department, Worcester Polytechnic Institute, Worcester, MA 01609, US
Brian Flynn
New England Robotics Validation and Experimentation (NERVE) Center, University of Massachusetts Lowell, Lowell, MA, 01852, US
Vinarak Kapoor
Robotics Engineering Department, Worcester Polytechnic Institute, Worcester, MA 01609, US
Adam Norton
New England Robotics Validation and Experimentation (NERVE) Center, University of Massachusetts Lowell
Robotics, Human-Robot Interaction, Test Methods, Interfaces
Holly Yanco
Professor of Computer Science, University of Massachusetts Lowell
Human-Robot Interaction, Robotics
B. Çalli
Robotics Engineering Department, Worcester Polytechnic Institute, Worcester, MA 01609, US