PREVENT: Proactive Risk Evaluation and Vigilant Execution of Tasks for Mobile Robotic Chemists using Multi-Modal Behavior Trees

πŸ“… 2025-10-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Mobile chemical robots suffer from workflow interruptions due to undetected fine-grained anomalies (e.g., unclosed reagent bottle caps), leading to resource waste and safety hazards; existing perception methods exhibit high false-positive rates, necessitating frequent human intervention and undermining autonomy. Method: We propose a multimodal behavior-tree–based framework for proactive risk assessment and task execution, integrating dexterous vision, navigation vision, and IoT-based gas sensing to establish a hierarchical perception architecture. Contribution/Results: The framework achieves zero false positives and zero false negatives in anomaly detection, enabling real-time, feedback-driven autonomous decision-making. Evaluated in simulated chemical experimentation scenarios, it significantly improves task deployment accuracy, eliminates both missed detections and spurious halts, and outperforms unimodal baselines. This work establishes a safe, robust system paradigm for autonomous chemical experimentation in hazardous environments.

πŸ“ Abstract
Mobile robotic chemists are a fast-growing trend in chemistry and materials research. However, these mobile robots still lack workflow-awareness skills, so even a small anomaly, such as an improperly capped sample vial, could disrupt the entire workflow. This wastes time and resources and could pose risks to human researchers, such as exposure to toxic materials. Existing perception mechanisms can predict anomalies, but they often generate excessive false positives. This may halt workflow execution unnecessarily, requiring researchers to intervene and resume the workflow when no problem actually exists, negating the benefits of autonomous operation. To address this problem, we propose PREVENT, a system comprising navigation and manipulation skills based on a multimodal Behavior Tree (BT) approach that can be integrated into existing software architectures with minimal modifications. Our approach involves a hierarchical perception mechanism that exploits AI techniques and sensory feedback through Dexterous Vision and Navigational Vision cameras and an IoT gas sensor module for execution-related decision-making. Experimental evaluations show that the proposed approach is comparatively efficient and completely avoids both false negatives and false positives when tested in simulated risk scenarios within our robotic chemistry workflow. The results also show that the proposed multimodal perception skills achieved deployment accuracies higher than the average of the corresponding unimodal skills, both for navigation and for manipulation.
Problem

Research questions and friction points this paper is trying to address.

Addresses mobile robotic chemists' lack of workflow awareness
Reduces false positives and negatives in anomaly detection
Enhances autonomous decision-making using multimodal perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Behavior Trees for robotic task execution
Hierarchical perception with AI and sensory feedback
Integration of vision cameras and IoT gas sensors
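The innovations above can be illustrated with a minimal sketch of how a multimodal Behavior Tree might gate task execution on several perception checks. This is not the paper's implementation: all class, sensor, and function names below are hypothetical, and the three boolean checks stand in for the Dexterous Vision, Navigational Vision, and IoT gas-sensor modalities described in the abstract.

```python
# Hypothetical sketch of a multimodal Behavior Tree (BT) safety gate,
# in the spirit of PREVENT. Names are illustrative, not from the paper.

SUCCESS, FAILURE = "SUCCESS", "FAILURE"


class Condition:
    """Leaf node wrapping a single perception check (one modality)."""

    def __init__(self, name, check):
        self.name, self.check = name, check

    def tick(self):
        return SUCCESS if self.check() else FAILURE


class Sequence:
    """Standard BT composite: succeeds only if every child succeeds, in order."""

    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS


def make_safety_gate(dex_vision_ok, nav_vision_ok, gas_ok, action):
    """Multimodal gate: the action leaf runs only when all three
    modalities agree that no anomaly is present."""
    return Sequence([
        Condition("dexterous_vision", dex_vision_ok),
        Condition("navigation_vision", nav_vision_ok),
        Condition("gas_sensor", gas_ok),
        Condition("execute_action", action),
    ])


# Example: an uncapped-vial anomaly flagged by dexterous vision halts
# the sequence before the action leaf is ever ticked.
gate = make_safety_gate(lambda: False, lambda: True, lambda: True,
                        lambda: True)
print(gate.tick())  # FAILURE
```

Because a `Sequence` short-circuits on the first failing child, each anomaly check acts as a precondition that stops the workflow before the risky action executes, which is the BT mechanism the paper builds its hierarchical perception on.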
πŸ”Ž Similar Papers
No similar papers found.