MORAL: A Multimodal Reinforcement Learning Framework for Decision Making in Autonomous Laboratories

📅 2025-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses sequential decision-making for autonomous robotic agents in dynamic, real-world laboratory environments. We propose a vision-language early-fusion framework that jointly aligns visual features and semantic instruction embeddings using BLIP-2, and integrates the resulting multimodal representations into both DQN and PPO reinforcement learning architectures. Evaluated on the BridgeData V2 benchmark, our approach achieves a +20% improvement in task success rate over baselines, along with higher cumulative rewards and superior instruction-following fidelity—as quantified by BLEU, METEOR, and ROUGE-L scores—outperforming vision-only Transformer and RNN-based methods. Our key contribution is the first integration of BLIP-2’s semantic alignment mechanism into a closed-loop robotic sequential decision-making pipeline, demonstrating that linguistic priors significantly accelerate policy learning and enhance cross-task generalization.

📝 Abstract
We propose MORAL (a multimodal reinforcement learning framework for decision making in autonomous laboratories) that enhances sequential decision-making in autonomous robotic laboratories through the integration of visual and textual inputs. Using the BridgeData V2 dataset, we generate fine-tuned image captions with a pretrained BLIP-2 vision-language model and combine them with visual features through an early fusion strategy. The fused representations are processed using Deep Q-Network (DQN) and Proximal Policy Optimization (PPO) agents. Experimental results demonstrate that multimodal agents achieve a 20% improvement in task completion rates and significantly outperform visual-only and textual-only baselines after sufficient training. Compared to transformer-based and recurrent multimodal RL models, our approach achieves superior performance in cumulative reward and caption quality metrics (BLEU, METEOR, ROUGE-L). These results highlight the impact of semantically aligned language cues in enhancing agent learning efficiency and generalization. The proposed framework contributes to the advancement of multimodal reinforcement learning and embodied AI systems in dynamic, real-world environments.
Problem

Research questions and friction points this paper is trying to address.

Enhancing decision-making in autonomous labs with multimodal inputs
Improving task completion rates using visual and textual data fusion
Advancing multimodal RL for embodied AI in dynamic environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal RL integrates visual and textual inputs
Early fusion combines BLIP-2 captions with visual features
DQN and PPO agents enhance decision-making performance
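The early-fusion idea above can be sketched as follows: concatenate the visual feature vector with the BLIP-2 caption embedding before any policy or value computation, then feed the fused vector to the agent's network. This is a minimal illustrative sketch, not the paper's implementation; the dimensions, the toy two-layer Q-network, and the random stand-in features are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not specify exact sizes here.
VIS_DIM, TXT_DIM, HID_DIM, N_ACTIONS = 512, 256, 128, 7

def early_fusion(visual_feat, caption_emb):
    """Early fusion: concatenate visual features with the caption
    embedding before any policy/value computation."""
    return np.concatenate([visual_feat, caption_emb])

# Toy two-layer Q-network over the fused representation (DQN-style head).
W1 = rng.normal(scale=0.01, size=(HID_DIM, VIS_DIM + TXT_DIM))
W2 = rng.normal(scale=0.01, size=(N_ACTIONS, HID_DIM))

def q_values(fused):
    h = np.maximum(W1 @ fused, 0.0)  # ReLU hidden layer
    return W2 @ h                    # one Q-value per discrete action

visual_feat = rng.normal(size=VIS_DIM)   # stand-in for an image-encoder output
caption_emb = rng.normal(size=TXT_DIM)   # stand-in for a BLIP-2 caption embedding

fused = early_fusion(visual_feat, caption_emb)
q = q_values(fused)
greedy_action = int(np.argmax(q))        # an epsilon-greedy policy would explore here
```

A PPO variant would replace the Q-value head with separate policy and value heads over the same fused vector; the fusion step itself is unchanged.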
Natalie Tirabassi
Department of Computer Science, Cleveland State University, Cleveland, OH USA
Sathish A. P. Kumar
Department of Computer Science, Cleveland State University, Cleveland, OH USA
Sumit Kumar Jha
University of Florida
Arvind Ramanathan
Argonne National Laboratory
Machine Learning · Computational Biology · Molecular biophysics · enzyme catalysis · higher-order statistics