🤖 AI Summary
This work addresses the low accuracy and poor stability of object hardness classification with vision-based tactile sensors under few-shot conditions. We propose an information-theoretic active sampling method that incorporates model uncertainty, quantified via predictive entropy and confidence, into a vision-based tactile hardness classification framework. The approach evaluates three probabilistic classifiers (logistic regression, random forest, and neural networks) for adaptive sample selection. It markedly improves sample efficiency, achieving a mean classification accuracy of 88.78% on the object set used in the human study, substantially outperforming both human judgment (48.00%) and a random sampling baseline. Results demonstrate strong effectiveness and robustness in low-data regimes. The core contribution is an uncertainty-guided active sampling paradigm for vision-based tactile perception, offering a resource-efficient solution for tactile sensing under data scarcity.
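To make the selection criterion concrete: at each step the current classifier scores every candidate in the unlabeled pool by its predictive uncertainty and queries the one it is least sure about. Below is a minimal, hypothetical sketch of such an entropy- and confidence-based loop using scikit-learn's LogisticRegression; the features, labels, and pool sizes are placeholders, not the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def entropy_score(probs):
    # Shannon entropy of the predictive distribution; higher = more uncertain.
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def confidence_score(probs):
    # 1 - max class probability; higher = less confident.
    return 1.0 - probs.max(axis=1)

def most_uncertain(model, X_candidates, strategy="entropy"):
    # Index of the candidate the current model is most uncertain about.
    probs = model.predict_proba(X_candidates)
    scores = entropy_score(probs) if strategy == "entropy" else confidence_score(probs)
    return int(np.argmax(scores))

# Toy stand-ins: in the real task, features would come from tactile images
# and labels from ground-truth hardness classes.
rng = np.random.default_rng(0)
X_pool = rng.normal(size=(100, 8))
y_pool = (X_pool[:, 0] > 0).astype(int)

# Seed with one example per class so the classifier can be fit.
labeled = [int(np.argmax(y_pool == 0)), int(np.argmax(y_pool == 1))]

for _ in range(10):
    model = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    pick = unlabeled[most_uncertain(model, X_pool[unlabeled])]
    labeled.append(pick)  # query its label, i.e., perform a new tactile press
```

The same loop applies unchanged to any classifier exposing class probabilities (e.g., a random forest or neural network), which is what makes the uncertainty criterion model-agnostic.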
📝 Abstract
One of the most important object properties that humans and robots perceive through touch is hardness. This paper investigates information-theoretic active sampling strategies for sample-efficient hardness classification with vision-based tactile sensors. We evaluate three probabilistic classifier models and two model-uncertainty-based sampling strategies on a robotic setup, as well as on a previously published dataset of samples collected by human testers. Our findings indicate that the active sampling approaches, driven by uncertainty metrics, surpass a random sampling baseline in accuracy and stability. Additionally, while the participants in our human study achieve an average accuracy of 48.00%, our best approach achieves an average accuracy of 88.78% on the same set of objects, demonstrating the effectiveness of vision-based tactile sensors for object hardness classification.