🤖 AI Summary
This work addresses the weak generalization capability of multi-objective reinforcement learning (MORL) in dynamic environments. We formally define the cross-environment MORL generalization task, highlighting that existing scalar-reward generalization studies neglect the intrinsic multi-objective nature of real-world problems and thus fail to characterize Pareto-optimal policy transfer under diverse preference specifications. To bridge this gap, we introduce the first parameterized, configurable MORL generalization benchmark suite, integrating environment diversity modeling and Pareto-frontier evaluation. We systematically evaluate mainstream MORL algorithms, including SMSAC and MOQL, and demonstrate their severe generalization deficiencies across environments. Empirical results confirm that explicit multi-objective modeling is essential for effective generalization. All code and benchmark components are publicly released, establishing a standardized infrastructure and empirical foundation for future MORL generalization research.
📝 Abstract
Real-world sequential decision-making tasks often require balancing trade-offs between multiple conflicting objectives, making Multi-Objective Reinforcement Learning (MORL) an increasingly prominent field of research. Despite recent advances, existing MORL literature has narrowly focused on performance within static environments, neglecting the importance of generalizing across diverse settings. Conversely, existing research on generalization in RL has consistently assumed scalar rewards, overlooking the inherent multi-objectivity of real-world problems. Generalization in the multi-objective context is fundamentally more challenging, as it requires learning a Pareto set of policies that addresses varying preferences across multiple objectives. In this paper, we formalize the concept of generalization in MORL and how it can be evaluated. We then contribute a novel benchmark featuring diverse multi-objective domains with parameterized environment configurations to facilitate future studies in this area. Our baseline evaluations of state-of-the-art MORL algorithms on this benchmark reveal limited generalization capabilities, suggesting significant room for improvement. Our empirical findings also expose limitations in the expressivity of scalar rewards, emphasizing the need for multi-objective specifications to achieve effective generalization. We further analyze the algorithmic complexities within current MORL approaches that could impede the transfer of performance from single- to multi-environment settings. This work fills a critical gap and lays the groundwork for future research bringing together two key areas in reinforcement learning: solving multi-objective decision-making problems and generalizing across diverse environments. We make our code available at https://github.com/JaydenTeoh/MORL-Generalization.
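To make the preference-dependence concrete, here is a minimal Python sketch of linear scalarization over vector-valued returns. All names and numbers (`policy_A`, `policy_B`, the return vectors) are hypothetical illustrations, not values from the paper; it simply shows why different preference weights select different Pareto-optimal policies, so no single scalar-reward policy covers all preferences.

```python
# Illustrative sketch: why MORL needs a *set* of policies.
# Two hypothetical policies with vector-valued returns over two
# conflicting objectives (e.g. speed vs. safety). Neither return
# vector dominates the other, so both policies are Pareto-optimal.
RETURNS = {
    "policy_A": (10.0, 2.0),  # strong on objective 1, weak on objective 2
    "policy_B": (4.0, 8.0),   # the reverse trade-off
}

def scalarize(vec_return, weights):
    """Linear scalarization: collapse a vector return into a scalar utility."""
    return sum(w * r for w, r in zip(weights, vec_return))

def best_policy(weights):
    """Pick the policy with the highest scalarized utility for a preference."""
    return max(RETURNS, key=lambda p: scalarize(RETURNS[p], weights))

# Different user preferences select different Pareto-optimal policies,
# so a single fixed scalar reward cannot express every trade-off.
print(best_policy((0.9, 0.1)))  # preference favoring objective 1 -> policy_A
print(best_policy((0.1, 0.9)))  # preference favoring objective 2 -> policy_B
```

Linear scalarization is only one choice of utility function; the point of the sketch is that any single fixed weighting commits to one point on the Pareto front, whereas the generalization task described above asks for the whole set.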