AI Summary
This work addresses the challenge of high-precision localization for mobile robots operating in narrow, cluttered, or low-visibility environments where visual and long-range sensing are severely limited. The authors propose a non-visual localization method based on a single biomimetic whisker sensor. By introducing a virtual sensor model and the concept of the preimage, the robot's pose is mapped into the whisker observation space, enabling iterative estimation of contact points and reconstruction of obstacle boundaries through the robot's motion model. This framework supports structured reasoning over state uncertainty and is compatible with both deterministic and possibilistic inference, without relying on any specific physical sensor implementation. Experimental results on both simulated and real-world platforms demonstrate localization errors below 7 mm, confirming the efficacy of whisker-based sensing as a lightweight and highly adaptable complement to conventional navigation systems.
Abstract
Whisker-like touch sensors offer unique advantages for short-range perception in environments where visual and long-range sensing are unreliable, such as confined, cluttered, or low-visibility settings. This paper presents a framework for estimating contact points and localizing a robot in a known planar environment using a single whisker sensor. We develop a family of virtual sensor models, each mapping robot configurations to sensor observations and enabling structured reasoning through the concept of preimages: the sets of robot states consistent with a given observation. The notion of virtual sensor models serves as an abstraction for reasoning about state uncertainty without dependence on any physical implementation. By combining sensor observations with a motion model, we estimate the contact point; iterative estimation then enables reconstruction of obstacle boundaries. Furthermore, intersecting states inferred from current observations with forward-projected states from previous steps allows accurate robot localization without relying on vision or external systems. The framework supports both deterministic and possibilistic formulations and is validated through simulation and physical experiments using a low-cost, 3D-printed, Hall-effect-based whisker sensor. Results show accurate contact estimation and localization with errors under 7 mm, demonstrating the potential of whisker-based sensing as a lightweight, adaptable complement to vision-based navigation.
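The preimage-intersection idea in the abstract can be illustrated with a minimal sketch. The example below is not the paper's implementation: it assumes a hypothetical 1-D corridor with a known sinusoidal wall boundary, a toy deflection model as the virtual sensor, and a discrete grid of candidate positions. Localization proceeds exactly as described above: at each step, candidate states are forward-projected through the motion model and intersected with the preimage of the new observation.

```python
import math

# --- Hypothetical 1-D setting (illustrative only, not the paper's model) ---

def wall_distance(x):
    # Known environment: distance (m) from the robot's path to the wall at x.
    return 0.05 + 0.01 * math.sin(10 * x)

def virtual_sensor(x):
    # Toy virtual sensor model: whisker deflection grows as the wall nears.
    return 1.0 / wall_distance(x)

def preimage(observation, grid, tol=0.5):
    # Preimage: all candidate states consistent with the observation
    # (predicted reading within a sensor tolerance).
    return {x for x in grid if abs(virtual_sensor(x) - observation) < tol}

def forward_project(states, dx):
    # Motion model: every candidate advances by the commanded step.
    return {round(x + dx, 4) for x in states}

# Discrete grid of candidate positions along a 1 m corridor.
grid = [round(i * 0.001, 4) for i in range(1000)]

true_x, dx = 0.20, 0.05     # unknown true position and commanded step
candidates = set(grid)       # start fully uncertain

for _ in range(4):
    true_x += dx
    obs = virtual_sensor(true_x)          # simulated whisker reading
    # Intersect forward-projected prior states with the new preimage.
    candidates = forward_project(candidates, dx) & preimage(obs, grid)

estimate = sum(candidates) / len(candidates)
```

Each intersection discards candidates whose predicted motion is inconsistent with the new observation, so ambiguous matches (e.g. symmetric points on the sinusoidal wall) are pruned and the candidate set contracts toward the true position, mirroring the deterministic formulation described in the abstract.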