📝 Abstract
The recent focus on and release of pre-trained models has been a key component of several advances in many fields (e.g., Natural Language Processing and Computer Vision): pre-trained models learn disparate latent embeddings that encode insightful representations. Reinforcement Learning (RL), on the other hand, focuses on maximizing the cumulative reward obtained through the agent's interaction with the environment. RL agents have no prior knowledge about the world: they either learn an end-to-end mapping between the observation and action spaces from scratch or, in more recent works, are paired with monolithic and computationally expensive Foundation Models. How to effectively combine and simultaneously leverage the hidden information of different pre-trained models in RL remains an open and understudied question. In this work, we propose Weight Sharing Attention (WSA), a new architecture that combines the embeddings of multiple pre-trained models to shape an enriched state representation, balancing the tradeoff between efficiency and performance. We run an extensive comparison between several combination modes, showing that WSA obtains performance on multiple Atari games comparable to that of end-to-end models. Furthermore, we study the generalization capabilities of this approach and analyze how scaling the number of models influences agents' performance during and after training.
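The core idea of fusing several pretrained embeddings through attention whose parameters are shared across all source models can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the projection matrix `W_proj`, the scoring vector `w_score`, and the assumption that each model's embedding has already been mapped to a common dimension are all hypothetical choices made for the example.

```python
import numpy as np

def weight_sharing_attention(embeddings, W_proj, w_score):
    """Fuse embeddings from multiple pretrained models using a single
    shared projection (W_proj) and a single shared scoring vector
    (w_score), so the parameter count does not grow with the number
    of source models.

    embeddings: list of (d,) vectors, one per pretrained model,
                assumed already mapped to a common dimension d.
    Returns the fused (d,) state representation.
    """
    projected = np.stack([e @ W_proj for e in embeddings])  # (M, d)
    scores = projected @ w_score                            # (M,)
    scores = scores - scores.max()                          # numerically stable softmax
    attn = np.exp(scores) / np.exp(scores).sum()            # attention over models
    return attn @ projected                                 # weighted fusion -> (d,)

rng = np.random.default_rng(0)
d = 8
# stand-ins for embeddings from e.g. a ViT and a ResNet encoder
vit_emb, resnet_emb = rng.normal(size=d), rng.normal(size=d)
W = rng.normal(size=(d, d)) / np.sqrt(d)  # shared across all models
w = rng.normal(size=d) / np.sqrt(d)       # shared scoring vector
fused = weight_sharing_attention([vit_emb, resnet_emb], W, w)
```

Because `W_proj` and `w_score` are reused for every source model, adding a third encoder changes the input list but not the fusion parameters, which is the efficiency/representation tradeoff the abstract refers to.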