On Generalization Across Environments In Multi-Objective Reinforcement Learning

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the weak generalization capability of multi-objective reinforcement learning (MORL) in dynamic environments. We formally define the cross-environment MORL generalization task—highlighting that existing scalar-reward generalization studies neglect the intrinsic multi-objective nature and thus fail to characterize Pareto-optimal policy transfer under diverse preference specifications. To bridge this gap, we introduce the first parameterized, configurable MORL generalization benchmark suite, integrating environment diversity modeling and Pareto-frontier evaluation. We systematically evaluate mainstream MORL algorithms—including SMSAC and MOQL—and demonstrate their severe generalization deficiencies across environments. Empirical results confirm that explicit multi-objective modeling is essential for effective generalization. All code and benchmark components are publicly released, establishing a standardized infrastructure and empirical foundation for future MORL generalization research.
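The summary's point that scalar-reward formulations cannot characterize Pareto-optimal policy transfer can be made concrete: a return vector lying on a concave region of the Pareto front is never optimal under any linear scalarization, no matter which preference weight is chosen. A minimal sketch of this well-known limitation (the return vectors below are hypothetical, not taken from the paper):

```python
import numpy as np

# Three mutually non-dominated return vectors on two objectives.
# The middle point sits on a concave part of the front.
points = np.array([[1.0, 0.0], [0.4, 0.4], [0.0, 1.0]])

# Sweep linear preference weights w and record which point maximizes
# the scalarized return w * obj1 + (1 - w) * obj2.
winners = set()
for w in np.linspace(0.0, 1.0, 101):
    scores = w * points[:, 0] + (1 - w) * points[:, 1]
    winners.add(int(np.argmax(scores)))

print(winners)  # {0, 2}: the concave point is never selected
```

Every weight yields a scalarized score of at least 0.5 for one of the extreme points, while the concave point scores only 0.4, so no scalar reward recovers it; this is the expressivity gap the paper's multi-objective modeling is meant to close.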

📝 Abstract
Real-world sequential decision-making tasks often require balancing trade-offs between multiple conflicting objectives, making Multi-Objective Reinforcement Learning (MORL) an increasingly prominent field of research. Despite recent advances, the existing MORL literature has focused narrowly on performance within static environments, neglecting the importance of generalizing across diverse settings. Conversely, existing research on generalization in RL has assumed scalar rewards, overlooking the inherent multi-objectivity of real-world problems. Generalization in the multi-objective context is fundamentally more challenging, as it requires learning a Pareto set of policies addressing varying preferences across multiple objectives. In this paper, we formalize the concept of generalization in MORL and how it can be evaluated. We then contribute a novel benchmark featuring diverse multi-objective domains with parameterized environment configurations to facilitate future studies in this area. Our baseline evaluations of state-of-the-art MORL algorithms on this benchmark reveal limited generalization capabilities, suggesting significant room for improvement. Our empirical findings also expose limitations in the expressivity of scalar rewards, emphasizing the need for multi-objective specifications to achieve effective generalization. We further analyze the algorithmic complexities within current MORL approaches that could impede the transfer of performance from the single- to multi-environment settings. This work fills a critical gap and lays the groundwork for future research that brings together two key areas in reinforcement learning: solving multi-objective decision-making problems and generalizing across diverse environments. We make our code available at https://github.com/JaydenTeoh/MORL-Generalization.
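Because generalization in MORL targets a Pareto set of policies rather than a single policy, evaluating algorithms requires filtering out dominated solutions before comparing fronts. A minimal non-dominated-set sketch under the maximization convention (the example return vectors are illustrative only, not results from the paper):

```python
import numpy as np

def pareto_front(returns):
    """Return indices of non-dominated return vectors.

    Vector v dominates w if v >= w in every objective and
    v > w in at least one (maximization convention).
    """
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(returns[j] >= returns[i]) and np.any(returns[j] > returns[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Three policies evaluated on two objectives (e.g. speed vs. energy):
rets = [[1.0, 0.2], [0.5, 0.9], [0.4, 0.1]]
print(pareto_front(rets))  # [0, 1]: the third policy is dominated
```

Front-quality metrics such as hypervolume are then computed over this filtered set, which is how Pareto-frontier evaluation is typically done in MORL benchmarks.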
Problem

Research questions and friction points this paper is trying to address.

Generalization across diverse environments in MORL
Evaluating MORL algorithms in multi-objective settings
Improving transfer from single- to multiple-environment settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalizes generalization in Multi-Objective Reinforcement Learning
Introduces benchmark with diverse multi-objective domains
Analyzes algorithmic complexities in MORL approaches
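The "parameterized environment configurations" contributed by the benchmark can be pictured as sampling environment parameters from ranges, training on one draw of variations and evaluating generalization on a held-out draw. A hypothetical sketch (the parameter names and ranges are invented for illustration; the released code defines the actual benchmark API):

```python
import random
from dataclasses import dataclass

@dataclass
class EnvConfig:
    """Illustrative environment parameters; not the benchmark's real schema."""
    gravity: float
    friction: float

def sample_configs(rng, n):
    """Draw n environment variations from fixed parameter ranges."""
    return [
        EnvConfig(gravity=rng.uniform(8.0, 12.0),
                  friction=rng.uniform(0.5, 1.5))
        for _ in range(n)
    ]

rng = random.Random(0)
train_cfgs = sample_configs(rng, 8)  # train across one set of variations
test_cfgs = sample_configs(rng, 4)   # evaluate on held-out variations
print(len(train_cfgs), len(test_cfgs))
```

Reporting Pareto-front quality averaged over the held-out configurations, rather than on a single fixed environment, is what distinguishes cross-environment MORL evaluation from the static-environment evaluations the paper critiques.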